I am mapping a SharePoint document library using office365-rest-client. My intention is to make a dictionary of the form:
file_dict = {File.serverRelativeUrl: [file_attribute_1, file_attribute_2, ...]}
Where File.serverRelativeUrl is a string, and one of the above-mentioned attributes is to be the name of the latest person to modify the file.
The File class (seen here) also has a method modified_by() that I have been trying to use to determine the name of the person who last modified the file. However, using this returns an instance of the User class (seen here).
Looking at the code behind User, it doesn't appear to contain any method that would allow the name of the modifier to be retrieved.
When looking at the files saved within my SharePoint document library, it is clear that the names of these users are present.
Therefore, I would like to know the following:
Does anybody know of the correct method to determine the names of the file modifiers (if one exists)?
Alternatively, is it possible to determine the email addresses of these users instead?
I have already attempted to use the File.properties attribute, but have found that the modifier's name / email address is not included within these properties:
import json
# Get file
print(json.dumps(file.properties))
Related
I have an Item object obtained by filtering on an account using exchangelib in Python 3.7. It is an email object. I need to find the parent folder name of this item, specifically the name field of the folder. (This is for tracking where specific emails are moved in a mailbox.)
I can see a field parent_folder_id in the item object which returns what I think is a valid folder id. This is also for a production mailbox where account.root.get_folder(folder_id=idObj) times out due to Exchange settings which I cannot change. Pretty much any request which caches fails with a timeout.
account=Account(...)
mailItems=account.inbox.all().filter(subject="foo")
print([i.parent_folder_id.id for i in mailItems])
This prints a list of folder ids. I need the names of these folders, and it's unclear how to proceed. Any help would be appreciated.
Since you're only searching account.inbox and not its subfolders, parent_folder_id will always point to account.inbox.
There's not a very good API yet for looking up folders by e.g. ID. The best solution currently is to use a folder QuerySet:
from exchangelib.folders import SingleFolderQuerySet

f = SingleFolderQuerySet(
    account=account,
    folder=account.root
).get(id=i.parent_folder_id.id)
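If you need to resolve many parent folder ids, you can wrap the lookup in a small cache so each distinct id is fetched only once. This is a sketch of the caching logic only: the `lookup` callable stands in for the `SingleFolderQuerySet(...).get(...)` call above, so the snippet stays self-contained and doesn't need a live Exchange connection.

```python
def resolve_folder_names(folder_ids, lookup):
    """Return the folder name for each id, calling `lookup` once per distinct id."""
    cache = {}
    names = []
    for fid in folder_ids:
        if fid not in cache:
            cache[fid] = lookup(fid)
        names.append(cache[fid])
    return names

# In practice, `lookup` would be something like:
#   lambda fid: SingleFolderQuerySet(account=account, folder=account.root).get(id=fid).name
print(resolve_folder_names(["a", "b", "a"], str.upper))  # ['A', 'B', 'A']
```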
I am trying to have my code pull a file when only a portion of the file name changes.
Example: I want to pull the file named JEFF_1234.csv where 1234 is an input from a GUI window.
The reason for the file name to have this structure is that I want to have one main database that has multiple files for a specific part number. So a user input of part number 1234 points to four files: JEFF_1234.csv, SAM_1234.csv, FRED_1234.csv and JACK_1234.csv.
What you need is a way to update a template with some dynamic value.
A neat way to do this is to define a template string using curly brackets as place-holders for the content that will be generated at runtime.
jefffile_template = "JEFF_{t}.csv"
Then, once you've assigned a value to the unknown pointer, you can convert your template into an appropriate string:
jeff_filename = jefffile_template.format(t="1234")
Which will store the value of JEFF_1234.csv into the variable jeff_filename for usage later in your program.
There are other similar ways of calling formatting functions, but this by-name style is my preferred method.
...and in case you're wondering, yes, this was still valid in 2.7.
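Applying the same template idea to all four files mentioned in the question (the prefix list comes from the question's own example), the whole set of filenames for a part number can be generated in one pass:

```python
name_prefixes = ["JEFF", "SAM", "FRED", "JACK"]
filename_template = "{name}_{part}.csv"

def filenames_for_part(part_number):
    """Build every data filename for one part number from the template."""
    return [filename_template.format(name=n, part=part_number) for n in name_prefixes]

print(filenames_for_part("1234"))
# ['JEFF_1234.csv', 'SAM_1234.csv', 'FRED_1234.csv', 'JACK_1234.csv']
```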
I am trying to execute the following code to push data to Salesforce using the simple_salesforce Python library:
from simple_salesforce import Salesforce
staging_df = hive.execute("select * from hdmni")
staging_df = staging_df.toPandas()
# # staging_df['birth_date']= staging_df['birth_date'].dt.date
staging_df['birth_date'] = staging_df['birth_date'].astype(str)
staging_df['encounter_start_date'] = staging_df['encounter_start_date'].astype(str)
staging_df['encounter_end_date'] = staging_df['encounter_end_date'].astype(str)
bulk_data = []
for row in staging_df.itertuples():
    d = row._asdict()
    del d['Index']
    bulk_data.append(d)
sf = Salesforce(password='', username='', security_token='')
sf.bulk.Delivery_Detail__c.insert(bulk_data)
I am getting this error while trying to send the dictionaries to Salesforce:
SalesforceMalformedRequest: Malformed request
https://subhotutorial-dev-ed.my.salesforce.com/services/async/38.0/job/7500o00000HtWP6AAN/batch/7510o00000Q15TnAAJ/result.
Response content: {'exceptionCode': 'InvalidBatch',
'exceptionMessage': 'Records not processed'}
There's something about your request that is not correct. While I don't know your use case, this line shows that you are attempting to insert into a custom object/entity in Salesforce:
sf.bulk.Delivery_Detail__c.insert(bulk_data)
The reason you can tell is because of the __c suffix, which gets appended onto custom objects and fields (that's two underscores, by the way).
Since you're inserting into a custom object, your fields would have to be custom, too. And note, you've not appended that suffix onto them.
Note: Every custom object/entity in Salesforce does come with a few standard fields to support system features: the record key (Id), the record name (Name), and audit fields (CreatedById, CreatedDate, etc.). These don't have a suffix. But none of the fields you reference are standard system fields, so the __c suffix would be expected.
I suspect that what Salesforce is expecting in your insert operation are field names like this:
Birth_Date__c
Encounter_Start_Date__c
Encounter_End_Date__c
These are referred to as the API name for both objects and fields, and anytime code interacts with them (whether via integration, or on code that executes directly on the Salesforce platform) you need to make certain you're using this API name.
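Assuming those are indeed the correct API names for your org (they're an assumption here; verify them in Setup or via describe), the fix on the Python side is just to rename each record's keys before handing the list to the bulk API. A minimal sketch:

```python
# Assumed mapping from dataframe column names to Salesforce API names;
# verify the right-hand sides against your org's field list.
field_map = {
    "birth_date": "Birth_Date__c",
    "encounter_start_date": "Encounter_Start_Date__c",
    "encounter_end_date": "Encounter_End_Date__c",
}

def to_api_names(records, mapping):
    """Rename each record's keys to their Salesforce API names, leaving unknown keys as-is."""
    return [{mapping.get(k, k): v for k, v in row.items()} for row in records]

bulk_data = [{"birth_date": "1990-01-01",
              "encounter_start_date": "2019-01-02",
              "encounter_end_date": "2019-01-03"}]
print(to_api_names(bulk_data, field_map))
```

The renamed list is what you would then pass to sf.bulk.Delivery_Detail__c.insert(...).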
Incidentally, you can retrieve this API name in a number of ways. Probably the easiest is to log into your Salesforce org, where under Setup > Object Manager > [some object] > Fields and Relationships you can view details of each field, including its API name.
You can also use the SObject describe APIs, either in native Apex code, or via integration through either the REST or SOAP APIs. Here's part of the response from the describe REST endpoint for the same object as my UI example above, found at https://[domain]/services/data/v47.0/sobjects/Expense__c/describe:
Looking at the docs for the simple-salesforce python library you're using, they've surfaced the describe API. You can find some info under Other Options. You would invoke it as sf.SObject.describe where "SObject" is the actual object you want to find the information about. For instance, in your case you would use:
sf.Delivery_Detail__c.describe()
As a good first troubleshooting step when interacting with a Salesforce object, I'd always recommend double-checking that you're referencing the API names correctly. I can't tell you how many times I've bumped into little things like an added or missing underscore, especially with the __c suffix.
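To pull the API names out of a describe result programmatically, note that the result is a dict whose "fields" key holds one dict per field, each with a "name" entry. A sketch using a hand-written stand-in for the response (only the shape matters here; a real call would be sf.Delivery_Detail__c.describe()):

```python
# Stand-in for a describe() response, trimmed to the part we need.
describe_result = {
    "fields": [
        {"name": "Id"},
        {"name": "Name"},
        {"name": "Birth_Date__c"},
    ]
}

# Collect every field's API name for comparison against your record keys.
api_names = [f["name"] for f in describe_result["fields"]]
print(api_names)  # ['Id', 'Name', 'Birth_Date__c']
```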
I am building an HDF5 file using PyTables python package. The file would be updated everyday with latest tick data. I want to create two groups - Quotes and Trades and tables for different futures expiries. I want to check if the group Quotes exists or not and if not then create it. What is the best way to do it in PyTables?
Here is a code snippet of where I am right now:
hdf_repos_filters = tables.Filters(complevel=1, complib='zlib')
for instrument in instruments:
    if options.verbose:
    hdf_file = os.path.join(dest_path, "{}.h5".format(instrument))
    store = tables.open_file(hdf_file, mode='a', filters=hdf_repos_filters)
    # This is where I want to check whether the groups "Quotes" and "Trades" exist and, if not, create them
Kapil is on the right track in that you want to use the __contains__ method, although because it is a double-underscore method it is not intended to be called directly, but rather through an alternate interface. In this case that interface is the in operator. So, to check whether a file hdf_file contains a group "Quotes", you can run:
with tables.open_file(hdf_file) as store:
    if "/Quotes" in store:
        print(f"Quotes already exists in the file {hdf_file}")
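To cover the "create it if missing" half of the question, the same membership test pairs with File.create_group. A self-contained sketch (it writes to a temporary file so it can run standalone; in the real script you'd reuse hdf_file and the existing mode='a' handle):

```python
import os
import tempfile

import tables

# Open (or create) the HDF5 file and ensure both groups exist at the root.
hdf_file = os.path.join(tempfile.mkdtemp(), "demo.h5")
with tables.open_file(hdf_file, mode="a") as store:
    for group_name in ("Quotes", "Trades"):
        if "/" + group_name not in store:
            store.create_group("/", group_name)
    print("/Quotes" in store and "/Trades" in store)  # True
```

Because the check runs before create_group, re-running the script against the same file is safe: existing groups are left untouched.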
I think I have figured it out.
I am using the File.__contains__(path) method of the File class in PyTables.
As per the documentation:
File.__contains__(path)
Is there a node with that path?
Returns True if the file has a node with the given path (a string), False otherwise.
PyTables File class
I'm creating HTML from reST using the rst2html tool which comes with docutils. It seems that the code already assigns id attributes to the individual sections, which can be used as fragment identifiers in a URL, i.e. as anchors to jump to a specific part of the page. Those id values are based on the text of the section headline. When I change the wording of that headline, the identifier will change as well, rendering old URLs invalid.
Is there a way to specify the name to use as an identifier for a given section, so that I can edit the headline without invalidating links? Would there be a way if I were to call the docutils publisher myself, from my own script?
I don't think you can set an explicit id in reST sections, but I could be mistaken.
If you'd rather have numbered ids, which depend on the ordering of the sections in the document tree rather than on their titles, you can do it with a small change to the document.set_id() method in docutils/nodes.py (at line 997 in my version).
Here is the patch:
def set_id(self, node, msgnode=None):
    for id in node['ids']:
        if id in self.ids and self.ids[id] is not node:
            msg = self.reporter.severe('Duplicate ID: "%s".' % id)
            if msgnode != None:
                msgnode += msg
    if not node['ids']:
-        for name in node['names']:
-            id = self.settings.id_prefix + make_id(name)
-            if id and id not in self.ids:
-                break
-        else:
+        if True:  # forcing numeric ids
            id = ''
            while not id or id in self.ids:
                id = (self.settings.id_prefix +
                      self.settings.auto_id_prefix + str(self.id_start))
                self.id_start += 1
        node['ids'].append(id)
    self.ids[id] = node
    return id
I just tested it and it generates the section ids as id1, id2...
If you don't want to change this system-wide file, you can probably monkey-patch it from a custom rst2html command.
I'm not sure if I really understand your question.
You can create explicit hyperlink targets to arbitrary locations in your document which can be used to reference these locations independent of the implicit hyperlink targets created by docutils:
.. _my_rstfile:

------------------
This is my rstfile
------------------

.. _a-section:

First Chapter
-------------

This is a link to a-section_ which is located in my_rstfile_.
As it seems that you want to create links between multiple rst files, I would however advise using Sphinx, as it can handle references to arbitrary locations between different files and has some more advantages, like a toctree and theming. You can use Sphinx not only for source code documentation but for general text processing. An example is the Sphinx documentation itself (there are hundreds of other examples on Read the Docs).
Invoking Sphinx should be simple using sphinx-quickstart. You can simply add your existing rst files to the toctree in index.rst and run make html. If you want to document Python code, you can use sphinx-apidoc, which will automatically generate API documentation.
I made a Sphinx extension to solve this problem. The extension takes the preceding internal target and uses that as the section's ID. Example (from bmu's answer):
.. _a-section:

First Chapter
-------------
The permalink on "First Chapter" would point to #a-section instead of #first-chapter. If there are multiple targets, it takes the last one.
Link to the extension: https://github.com/GeeTransit/sphinx-better-subsection