I'm trying to update a note in Evernote.
I set a filter, get the notes list, and I can also change a note's title.
But when I try to change note content, nothing happens.
from evernote.api.client import EvernoteClient
import evernote.edam.type.ttypes as Types
from evernote.edam.notestore.ttypes import NoteFilter, NotesMetadataResultSpec
client = EvernoteClient(token="xxxxx", sandbox=True)
note_store = client.get_note_store()
updated_filter = NoteFilter(words='abaco')
result_list = note_store.findNotesMetadata(updated_filter, 0, 10000, NotesMetadataResultSpec(includeTitle=True))
for note in result_list.notes:
    print "----- TITLE -----\n%s\n----- GUID -----\n%s\n----- CONTENT -----\n%s" % (note.title, note.guid, note_store.getNoteContent(note.guid))
    note.title = "pippo"
    note.guid = note.guid
    note.content = '<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE en-note SYSTEM "http://xml.evernote.com/pub/enml2.dtd">'
    note.content += '<en-note>Note updated</en-note>'
    note = note_store.updateNote(note)
I receive no error but the note is not updated.
I'm using Python 2.7.
Thanks in advance!
The return value of NoteStore#findNotesMetadata is a NotesMetadataList, which contains NoteMetadata objects, not Note objects. To update a note, you should call NoteStore#getNote first, update the fields on the returned Note, and then call NoteStore#updateNote.
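For example, a minimal sketch against the Evernote Python SDK used above (the ENML string is illustrative, not from the original post):
for meta in result_list.notes:
    # Fetch the full Note first (withContent=True, no resource data needed)
    note = note_store.getNote(meta.guid, True, False, False, False)
    note.title = "pippo"
    note.content = ('<?xml version="1.0" encoding="UTF-8"?>'
                    '<!DOCTYPE en-note SYSTEM "http://xml.evernote.com/pub/enml2.dtd">'
                    '<en-note>Note updated</en-note>')
    note_store.updateNote(note)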
I've been using the list_aliases() method of the KMS client for a while now without any issues. But recently it has stopped listing one of the alias names I want to use.
import boto3
kms_client = boto3.client('kms')
# Getting all the aliases from my KMS
key_aliases = kms_client.list_aliases()
key_aliases = key_aliases['Aliases']
# DO SOMETHING...
The key_aliases list above contains all the keys except the one I want to use. However, I can see from the AWS KMS UI that the key is enabled. Not sure why the list_aliases() method is not returning it.
Has anyone faced this problem?
It looks like the response is truncated. The default number of aliases fetched by this API call is 50. You can increase the limit up to 100, which should solve your problem.
key_aliases = kms_client.list_aliases(Limit=100)
You should also check whether the Truncated field in the response is set to True. In that case, you can just make another API call to fetch the remaining results:
if key_aliases['Truncated'] is True:
    key_aliases = kms_client.list_aliases(Marker=key_aliases['NextMarker'])
    ...
def get_keys_arn(kmsclient, key_name):
    # KMS caps Limit at 100; larger values are rejected by the API
    alias_list = kmsclient.list_aliases(Limit=100)
    if alias_list['Truncated'] is True:
        alias_list_trun = alias_list['Aliases']
        for alias in alias_list_trun:
            if alias["AliasName"] == "alias/" + key_name:
                return alias["TargetKeyId"]
        while alias_list['Truncated']:
            alias_list = kmsclient.list_aliases(Limit=100, Marker=alias_list['NextMarker'])
            alias_list_trun = alias_list['Aliases']
            for alias in alias_list_trun:
                if alias["AliasName"] == "alias/" + key_name:
                    return alias["TargetKeyId"]
    else:
        alias_list = alias_list['Aliases']
        for alias in alias_list:
            if alias["AliasName"] == "alias/" + key_name:
                return alias["TargetKeyId"]
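A tidier alternative is boto3's built-in paginator for list_aliases, which follows Truncated/NextMarker for you. A minimal sketch (the function name get_target_key_id is mine, not from the original code):
import boto3

def get_target_key_id(kms_client, key_name):
    # Walk every page of aliases until the requested one turns up
    paginator = kms_client.get_paginator('list_aliases')
    for page in paginator.paginate():
        for alias in page['Aliases']:
            if alias['AliasName'] == 'alias/' + key_name:
                return alias.get('TargetKeyId')
    return None

kms_client = boto3.client('kms')
print(get_target_key_id(kms_client, 'my-key'))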
I am getting JIRA data using the following Python code.
How do I store the response for more than one key (my example shows only one key, but in general I get a lot of data) and print only the values corresponding to total, key, customfield_12830 and summary?
import requests
import json
import logging
import datetime
import base64
import urllib
serverURL = 'https://jira-stability-tools.company.com/jira'
user = 'username'
password = 'password'
query = 'project = PROJECTNAME AND "Build Info" ~ BUILDNAME AND assignee=ASSIGNEENAME'
jql = '/rest/api/2/search?jql=%s' % urllib.quote(query)
response = requests.get(serverURL + jql,verify=False,auth=(user, password))
print response.json()
Output of response.json(): http://pastebin.com/h8R4QMgB
From the link you pasted to pastebin and the JSON I saw, the response has an issues list, where each issue contains key, fields (which holds the custom fields), self, id and expand.
You can simply iterate through this response and extract the values for the keys you want, like so:
data = response.json()
issues = data.get('issues', list())
x = list()
for issue in issues:
    temp = {
        'key': issue['key'],
        'customfield': issue['fields']['customfield_12830'],
        'total': issue['fields']['progress']['total']
    }
    x.append(temp)
print(x)
x is a list of dictionaries containing the data for the fields you mentioned. Let me know if I have been unclear somewhere or if this is not what you are looking for.
PS: It is always advisable to use dict.get('keyname', None) to read values, since you can supply a default if the key is not found. I didn't do that here because I just wanted to show the approach.
Update: In the comments you (OP) mentioned that it raises an AttributeError. Try this code:
data = response.json()
issues = data.get('issues', list())
x = list()
for issue in issues:
    temp = dict()
    key = issue.get('key', None)
    if key:
        temp['key'] = key
    fields = issue.get('fields', None)
    if fields:
        customfield = fields.get('customfield_12830', None)
        temp['customfield'] = customfield
        progress = fields.get('progress', None)
        if progress:
            total = progress.get('total', None)
            temp['total'] = total
    x.append(temp)
print(x)
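The question also asks for summary and total; here is a sketch of how those could be read from the same search response, assuming the standard JIRA search response shape (total at the top level, summary under fields):
data = response.json()
print(data.get('total'))  # total number of matching issues

for issue in data.get('issues', []):
    fields = issue.get('fields', {})
    print("%s %s %s" % (issue.get('key'), fields.get('summary'), fields.get('customfield_12830')))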
I have a special XML file like below:
<alarm-dictionary source="DDD" type="ProxyComponent">
<alarm code="402" severity="Alarm" name="DDM_Alarm_402">
<message>Database memory usage low threshold crossed</message>
<description>dnKinds = database
type = quality_of_service
perceived_severity = minor
probable_cause = thresholdCrossed
additional_text = Database memory usage low threshold crossed
</description>
</alarm>
...
</alarm-dictionary>
I know that in Python I can get the "code" and "severity" attributes of the alarm tag like this:
for alarm_tag in dom.getElementsByTagName('alarm'):
    if alarm_tag.hasAttribute('code'):
        alarmcode = str(alarm_tag.getAttribute('code'))
And I can get the text in the message tag like below:
for messages_tag in dom.getElementsByTagName('message'):
    messages = ""
    for message_tag in messages_tag.childNodes:
        if message_tag.nodeType in (message_tag.TEXT_NODE, message_tag.CDATA_SECTION_NODE):
            messages += message_tag.data
But I also want to get the values inside the description tag, like dnKinds (database), type (quality_of_service), perceived_severity (minor) and probable_cause (thresholdCrossed).
That is, I also want to parse the content inside the tags in the XML.
Could anyone help me with this?
Thanks a lot!
Once you have the text from the description tag, it's nothing to do with XML parsing anymore. You just need to do some simple string parsing to turn the type = quality_of_service key/value lines into something nicer to use in Python, like a dictionary.
With some slightly simpler parsing thanks to ElementTree, it would look like this:
messages = """
<alarm-dictionary source="DDD" type="ProxyComponent">
<alarm code="402" severity="Alarm" name="DDM_Alarm_402">
<message>Database memory usage low threshold crossed</message>
<description>dnKinds = database
type = quality_of_service
perceived_severity = minor
probable_cause = thresholdCrossed
additional_text = Database memory usage low threshold crossed
</description>
</alarm>
...
</alarm-dictionary>
"""
import xml.etree.ElementTree as ET

# Parse XML
tree = ET.fromstring(messages)

for alarm in tree.getchildren():
    # Get code and severity
    print alarm.get("code")
    print alarm.get("severity")
    # Grab description text
    descr = alarm.find("description").text
    # Parse "thing = other" lines into a dict like {'thing': 'other'}
    info = {}
    for dl in descr.splitlines():
        if len(dl.strip()) > 0:
            key, _, value = dl.partition("=")
            info[key.strip()] = value.strip()
    print info
I'm not completely sure on Python, but after some quick research: seeing as you can already get all of the content from the description tag, can you not split it on line breaks, and then split each line on the equals sign using the str.split() function to give you the name and value separately?
e.g.
for messages_tag in dom.getElementsByTagName('message'):
    messages = ""
    for message_tag in messages_tag.childNodes:
        if message_tag.nodeType in (message_tag.TEXT_NODE, message_tag.CDATA_SECTION_NODE):
            messages += message_tag.data
    tag = messages.split('=')
    tagName = tag[0]
    tagValue = tag[1]
(I haven't taken into account splitting each line up and looping)
But that should get you on the right track :)
AFAIK there is no library to handle the text as DOM elements.
You can however (after you have the message in the message variable) do:
description = {}
messageParts = message.split("\n")
for part in messageParts:
    descInfo = part.split("=")
    description[descInfo[0].strip()] = descInfo[1].strip()
Then you'll have, inside description, the information you need in the form of a key-value map.
You should also add error handling to my code...
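For instance, here is a sketch of that error handling, which simply skips blank or malformed lines (the "=" membership check is my addition):
description = {}
for part in message.split("\n"):
    if "=" not in part:
        continue  # skip blank or malformed lines
    key, _, value = part.partition("=")
    description[key.strip()] = value.strip()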
I'm a bit new to Python dev -- I'm creating a larger project for some web scraping. I want to approach this as "Pythonically" as possible, and would appreciate some help with the project structure. Here's how I'm doing it now:
Basically, I have a base class for an object whose purpose is to go to a website and parse specific data from it into its own list, jobs[]
minion.py
class minion:
    # Empty getJobs() function to be defined by object pre-instantiation
    def getJobs(self):
        pass

    # Constructor for a minion that requires site authorization
    # Ex: minCity1 = minion('Some city', 'http://portal.com/somewhere', 'user', 'password')
    # or  minCity2 = minion('Some city', 'http://portal.com/somewhere')
    def __init__(self, title, URL, user='', password=''):
        self.title = title
        self.URL = URL
        self.user = user
        self.password = password
        self.jobs = []
        if (user == '' and password == ''):
            self.reqAuth = 0
        else:
            self.reqAuth = 1

    def displayjobs(self):
        for j in self.jobs:
            j.display()
I'm going to have about 100 different data sources. The way I'm doing it now is to just create a separate module for each "Minion", which defines (and binds) a more tailored getJobs() function for that object.
Example: minCity1.py
from minion import minion
from BeautifulSoup import BeautifulSoup
import urllib2
from job import job

# MINION CONFIG
minTitle = 'Some city'
minURL = 'http://www.somewebpage.gov/'

# Here we define a function that will be bound to this object's getJobs function
def getJobs(self):
    page = urllib2.urlopen(self.URL)
    soup = BeautifulSoup(page)
    # For each row
    for tr in soup.findAll('tr'):
        tJob = job()
        span = tr.findAll(['span', 'class="content"'])
        # If row has 5 spans, pull data from span 2 and 3 ( [1] and [2] )
        if len(span) == 5:
            tJob.title = span[1].a.renderContents()
            tJob.client = 'Some City'
            tJob.source = minURL
            tJob.due = span[2].div.renderContents().replace('<br />', '')
            self.jobs.append(tJob)

# Don't forget to bind the function to the object!
minion.getJobs = getJobs

# Instantiate the object
mCity1 = minion(minTitle, minURL)
I also have a separate module which simply contains a list of all the instantiated minion objects (which I have to update each time I add one):
minions.py
from minion_City1 import mCity1
from minion_City2 import mCity2
from minion_City3 import mCity3
from minion_City4 import mCity4
minionList = [mCity1,
mCity2,
mCity3,
mCity4]
main.py references minionList for all of its activities for manipulating the aggregated data.
This seems a bit chaotic to me, and was hoping someone might be able to outline a more Pythonic approach.
Thank you, and sorry for the long post!
Instead of creating functions and assigning them to objects (or whatever minion is, I'm not really sure), you should definitely use classes. Then you'll have one class for each of your data sources.
If you want, you can even have these classes inherit from a common base class, but that isn't absolutely necessary.
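For illustration, here is a minimal sketch of the class-based approach. The names are hypothetical, and it assumes minion is made a new-style class (class minion(object):) so that __subclasses__() works:
from minion import minion

class MinCity1(minion):
    # One subclass per data source; getJobs is a normal method override,
    # so no function binding is needed.
    def __init__(self):
        minion.__init__(self, 'Some city', 'http://www.somewebpage.gov/')

    def getJobs(self):
        # site-specific scraping logic, as in the original minCity1.py
        pass

# The registry can then be built without a hand-maintained list:
minionList = [cls() for cls in minion.__subclasses__()]
Note that the modules defining the subclasses still need to be imported somewhere for __subclasses__() to see them.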
With python-gdata 2.0.14, I used the following pieces of code to create and upload documents:
# To create a document
import gdata.docs
import gdata.docs.client
from gdata.data import MediaSource
gdClient = gdata.docs.client.DocsClient(source="my-app")
gdClient.ssl = True
gdClient.ClientLogin("login", "pa$$word", gdClient.source)
ms = MediaSource(file_path="temp.html", content_type="text/html")
entry = gdClient.Upload(ms, "document title")
print "uploaded, url is", entry.GetAlternateLink().href
and
# To update a document
entry.title.text = "updated title"
entry = gdClient.Update(entry, media_source=ms, force=True)
print "updated, url is", entry.GetAlternateLink().href
However, this code no longer works with python-gdata 2.0.16, because the DocsClient class no longer has the Upload and Update functions.
I tried to use this
# Try to create a document
gdClient = gdata.docs.client.DocsClient(source="my-app")
gdClient.ssl = True
gdClient.ClientLogin("login", "pa$$word", gdClient.source)
ms = MediaSource(file_path="temp.html", content_type="text/html")
entry = gdata.docs.data.Resource(type=gdata.docs.data.DOCUMENT_LABEL, title="document title")
self.resource = gdClient.CreateResource(entry, media=ms)
… but I get this error:
gdata.client.Unauthorized: Unauthorized - Server responded with: 401, 'Token invalid'
Can anybody tell me where my mistake is and how I should use the new API?
P.S. The documentation hasn't been updated and still uses the old-style code.
I was having issues with this recently too. This worked for me:
import gdata.docs.data
import gdata.docs.client
import gdata.data

client = gdata.docs.client.DocsClient(source='your-app')
client.api_version = "3"
client.ssl = True
client.ClientLogin("your@email.com", "password", client.source)

filePath = "/path/to/file"
newResource = gdata.docs.data.Resource(filePath, "document title")
media = gdata.data.MediaSource()
media.SetFileHandle(filePath, 'mime/type')
newDocument = client.CreateResource(newResource, create_uri=gdata.docs.client.RESOURCE_UPLOAD_URI, media=media)
Edit: Added the packages to import to avoid confusion