Use rtkit to get content from tickets in Request Tracker - python

I'm trying to get some content from tickets with the REST API on Ubuntu 16.04, and I'm having trouble getting that content using the following code:
from rtkit.resource import RTResource
from rtkit.authenticators import QueryStringAuthenticator
from rtkit.errors import RTResourceError
from rtkit import set_logging
import logging
import re

set_logging('debug')
logger = logging.getLogger('rtkit')
resource = RTResource('http://ubuntu/rt/REST/1.0/', 'root', '**passwd**', QueryStringAuthenticator)

try:
    response = resource.get(path='ticket/2')
    myTicket = response.as_object()  ## Returns an RtObj instance
except RTResourceError as e:
    logger.error(e.response.status_int)
    logger.error(e.response.status)
    logger.error(e.response.parsed)
And the terminal is giving this error:
File "LoginQuery.py", line 85, in <module>
myTicket = response.as_object() ## Returns an RtObj instance
AttributeError: 'RTResponse' object has no attribute 'as_object'
Has anyone had this problem too, and does anyone know how to solve it?
Help :)

According to the package documentation, it seems the proper way to read the response is to use response.parsed:
try:
    response = resource.get(path='ticket/1')
    for r in response.parsed:
        for t in r:
            logger.info(t)
except RTResourceError as e:
    logger.error(e.response.status_int)
    logger.error(e.response.status)
    logger.error(e.response.parsed)

Yes, but I was trying to get the information from the contents separately... and some hours later I came up with this:
try:
    response = resource.get(path='ticket/2')
    Ticket = response.parsed
    Criation = Ticket[0][12][1]
except RTResourceError as e:
    logger.error(e.response.status)
This allows me to get the date when the ticket was created.
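Indexing by position (Ticket[0][12][1]) is fragile, since the field order can change between tickets or RT versions. Assuming response.parsed is a list of lists of (name, value) pairs, which is what the indexing above suggests, a safer sketch is to look fields up by name:

try:
    response = resource.get(path='ticket/2')
    # turn the parsed (name, value) pairs into a dict keyed by field name
    fields = dict(response.parsed[0])
    created = fields.get('Created')   # 'Created' is the usual RT field name
    subject = fields.get('Subject')
    logger.info('Created: %s  Subject: %s', created, subject)
except RTResourceError as e:
    logger.error(e.response.status)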

Related

Why does this python script work on my local machine but not on Heroku?

Hi there. I'm building a simple scraping tool. Here's the code that I have for it.
from bs4 import BeautifulSoup
import requests
from lxml import html
import gspread
from oauth2client.service_account import ServiceAccountCredentials
import datetime

scope = ['https://spreadsheets.google.com/feeds']
credentials = ServiceAccountCredentials.from_json_keyfile_name('Programming 4 Marketers-File-goes-here.json', scope)

site = 'http://nathanbarry.com/authority/'
hdr = {'User-Agent': 'Mozilla/5.0'}
req = requests.get(site, headers=hdr)
soup = BeautifulSoup(req.content)

def getFullPrice(soup):
    divs = soup.find_all('div', id='complete-package')
    price = ""
    for i in divs:
        price = i.a
    completePrice = (str(price).split('$', 1)[1]).split('<', 1)[0]
    return completePrice

def getVideoPrice(soup):
    divs = soup.find_all('div', id='video-package')
    price = ""
    for i in divs:
        price = i.a
    videoPrice = (str(price).split('$', 1)[1]).split('<', 1)[0]
    return videoPrice

fullPrice = getFullPrice(soup)
videoPrice = getVideoPrice(soup)
date = datetime.date.today()

gc = gspread.authorize(credentials)
wks = gc.open("Authority Tracking").sheet1

row = len(wks.col_values(1)) + 1
wks.update_cell(row, 1, date)
wks.update_cell(row, 2, fullPrice)
wks.update_cell(row, 3, videoPrice)
This script runs on my local machine. But, when I deploy it as a part of an app to Heroku and try to run it, I get the following error:
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.6/site-packages/gspread/client.py", line 219, in put_feed
r = self.session.put(url, data, headers=headers)
File "/app/.heroku/python/lib/python3.6/site-packages/gspread/httpsession.py", line 82, in put
return self.request('PUT', url, params=params, data=data, **kwargs)
File "/app/.heroku/python/lib/python3.6/site-packages/gspread/httpsession.py", line 69, in request
response.status_code, response.content))
gspread.exceptions.RequestError: (400, "400: b'Invalid query parameter value for cell_id.'")
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "AuthorityScraper.py", line 44, in
wks.update_cell(row, 1, date)
File "/app/.heroku/python/lib/python3.6/site-packages/gspread/models.py", line 517, in update_cell
self.client.put_feed(uri, ElementTree.tostring(feed))
File "/app/.heroku/python/lib/python3.6/site-packages/gspread/client.py", line 221, in put_feed
if ex[0] == 403:
TypeError: 'RequestError' object does not support indexing
What do you think might be causing this error? Do you have any suggestions for how I can fix it?
There are a couple of things going on:
1) The Google Sheets API returned an error: "Invalid query parameter value for cell_id":
gspread.exceptions.RequestError: (400, "400: b'Invalid query parameter value for cell_id.'")
2) A bug in gspread caused an exception upon receipt of the error:
TypeError: 'RequestError' object does not support indexing
Python 3 removed __getitem__ from BaseException, which this gspread error handling relies on. This doesn't matter too much, because it would have raised an UpdateCellError exception anyway.
My guess is that you are passing an invalid row number to update_cell. It would be helpful to add some debug logging to your script to show, for example, which row it is trying to update.
It may be better to start with a worksheet with zero rows and use append_row instead. However, there does seem to be an outstanding issue in gspread with append_row, and it may actually be the same issue you are running into.
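For reference, a minimal sketch of that append_row approach, reusing the wks, fullPrice, and videoPrice values from the script above (whether it avoids the 400 depends on the gspread issue mentioned):

# Sketch: append one whole row instead of computing the row index
# and updating cells one at a time; values are passed as strings.
row_values = [str(datetime.date.today()), str(fullPrice), str(videoPrice)]
wks.append_row(row_values)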
I encountered the same problem. BS4 works fine on a local machine; however, for some reason it is far too slow on the Heroku server, which results in an error.
I switched to lxml and it is working fine now.
Install it by command:
pip install lxml
A sample code snippet is given below:
from lxml import html
import requests

getpage = requests.get("https://url_here")
gethtmlcontent = html.fromstring(getpage.content)
data = gethtmlcontent.xpath('//div[@class = "class-name"]/text()')
# this is a sample for fetching data from the dummy div
data = data[0:n]  # as per your requirement
# now inject the data into a Django template

Python "TypeError: 'builtin_function_or_method' object is not subscriptable"

It looks like there is some error in it, but I failed to find it.
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError
from bs4 import BeautifulSoup
import re

print('https://v.qq.com/x/page/h03425k44l2.html\nhttps://v.qq.com/x/cover/dn7fdvf2q62wfka/m0345brcwdk.html\nhttp://v.qq.com/cover/2/2iqrhqekbtgwp1s.html?vid=c01350046ds')
web = input('Please enter the URL: ')
if re.search(r'vid=', web):
    patten = re.compile(r'vid=(.*)')
    vid = patten.findall(web)
    vid = vid[0]
else:
    newurl = web.split("/")[-1]
    vid = newurl.replace('.html', ' ')
# extract the vid from the video page URL
getinfo = 'http://vv.video.qq.com/getinfo?vids{vid}&otype=xlm&defaultfmt=fhd'.format(vid=vid.strip())

def getpage(url):
    req = Request(url)
    user_agent = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit'
    req.add_header('User-Agent', user_agent)
    try:
        response = urlopen(url)
    except HTTPError as e:
        print('The server couldn\'t fulfill the request.')
        print('Error code:', e.code)
    except URLError as e:
        print('We failed to reach a server.')
        print('Reason:', e.reason)
    html = response.read().decode('utf-8')
    return html
# function that opens a web page

a = getpage(getinfo)
soup = BeautifulSoup(a, "html.parser")
for e1 in soup.find_all('url'):
    ippattent = re.compile(r"((?:(2[0-4]\d)|(25[0-5])|([01]\d\d?))\.){3}(?:(2[0-4]\d)|(255[0-5])|([01]?\d\d?))")
    if re.search(ippattent, e1.get_text()):
        ip = e1.get_text()
for e2 in soup.find_all('id'):
    idpattent = re.compile(r"\d{5}")
    if re.search(idpattent, e2.get_text()):
        id = e2.get_text()
filename = vid.strip() + '.p' + id[2:] + '.1.mp4'
# find the ID and build FILENAME
getkey = 'http://vv.video.qq.com/getkey?format={id}&otype=xml&vt=150&vid{vid}&ran=0%2E9477521511726081&charge=0&filename={filename}&platform=11'.format(id=id, vid=vid.strip(), filename=filename)
# build the getkey URL from the information in getinfo
b = getpage(getkey)
key = re.findall(r'<key>(.*)<\/key>', b)
videourl = ip + filename + '?' + 'vkey=' + key[0]
print('Video playback URL: ' + videourl)
# done
I ran it and got this:
Traceback (most recent call last):
File "C:\Users\DYZ_TOGA\Desktop\qq.py", line 46, in <module>
filename=vid.strip()+'.p'+id[2:]+'.1.mp4'
TypeError: 'builtin_function_or_method' object is not subscriptable
What should I do? I don't know how to change my code to correct it.
The root of your problem is here:
if re.search(idpattent, e2.get_text()):
    id = e2.get_text()
If this is false, you never set id. And that means id is the built-in function of that name, which gets the unique ID of any object. Since it's a function, not the string you expect, you can't do this:
id[2:]
Hence the error you are getting.
My suggestions are:
Use a different variable name; you would have gotten an error about it not being defined in this case, which would have made the problem easier to solve.
When you don't find the ID, don't continue the script; it won't work anyway. If you expected to find it, and are not sure why that's not happening, that's a different question you should ask separately.
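Concretely, a sketch of both suggestions applied to the loop from the question (video_id is just a name I picked):

# use a distinct name instead of shadowing the built-in id(),
# and stop early if no ID was found in the getinfo response
video_id = None
for e2 in soup.find_all('id'):
    idpattent = re.compile(r"\d{5}")
    if re.search(idpattent, e2.get_text()):
        video_id = e2.get_text()
        break
if video_id is None:
    raise SystemExit('no 5-digit id found in the getinfo response')
filename = vid.strip() + '.p' + video_id[2:] + '.1.mp4'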
id is a built-in function in Python, and it seems you are using the same name to store a variable. It is a bad habit to use a built-in name as a variable name; use a different name instead.
if re.search(idpattent, e2.get_text()):
    id = e2.get_text()
filename = vid.strip() + '.p' + id[2:] + '.1.mp4'
If the above if is not true, id will never be set to a string value. By default, id is a function in Python, so you cannot do id[2:]; Python expects id() to be called.

How to download photos from Flickr by Flickr API in Python 3

noob question series...
I am a new learner of Python, and recently I wanted to create a small Python application that can collect photos from Flickr based on a search input (e.g., if I input "dog", it will download all dog images from Flickr).
I did some research online and noticed that the Flickr API might be the best way to do this, and that the method flickr.photos.getSizes should be the one I need to use.
However, I have a few questions when coding:
I have applied for my key and secret for the Flickr API; I just don't know what to do next with flickr.photos.getSizes in Python to download photos. How do I call this method in Python? (I also noticed that the required arguments for this method are the API key and photo_id; how do I get photo_ids based on the search input "dog"?)
Then I followed the tutorial at https://github.com/alexis-mignon/python-flickr-api/wiki/Tutorial, but when I imported flickr_api I got this error message:
Could not load all modules
<class 'ImportError'> No module named 'objects'
Traceback (most recent call last):
File "D:/Agfa/Projects/Image/flickr.py", line 2, in <module>
import flickr_api
File "D:\Application\Anaconda3\lib\site-packages\flickr_api\__init__.py", line 32, in <module>
from auth import set_auth_handler
ImportError: cannot import name 'set_auth_handler'
Then I took a look at the __init__.py:
try:
    from objects import *
    import objects
    import upload as Upload
    from upload import upload, replace
except Exception as e:
    print "Could not load all modules"
    print type(e), e

from auth import set_auth_handler
from method_call import enable_cache, disable_cache
from keys import set_keys
from _version import __version__
It seems like this library does not support Python 3, but I don't know what to do. (I cannot install method_call, keys, or _version on my Python 3.) I guess I will use flickrapi instead.
Thank you so much for your time and again thanks in advance.
I think I finally got the proper way to use FlickrAPI:
There are many ways, but I figured out two:
import os
import urllib.request

# assumes `flickr` (set up below) and a download directory `path` are defined
def flickr_walk(keyword):
    count = 0
    photos = flickr.walk(text=keyword,
                         tag_mode='all',
                         tags=keyword,
                         extras='url_c',
                         per_page=100)
    for photo in photos:
        try:
            url = photo.get('url_c')
            urllib.request.urlretrieve(url, os.path.join(path, str(count) + ".jpg"))
            count += 1
        except Exception as e:
            print('failed to download image')
flickr.walk uses the Photos.search API; I can use the API directly as well:
import xml.etree.ElementTree as ET

def flickr_search(keyword):
    obj = flickr.photos.search(text=keyword,
                               tags=keyword,
                               extras='url_c',
                               per_page=5)
    for photo in obj:
        url = photo.get('url_c')
    photos = ET.dump(obj)  # ET.dump prints the raw XML of the response; it returns None
    print(photos)
Remember to get the key and secret first:
import flickrapi

api_key = 'xxxxxxxxxxxxxxxx'
api_secret = 'xxxxxxxxxxxxx'
flickr = flickrapi.FlickrAPI(api_key, api_secret, cache=True)
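A short usage sketch tying the pieces together (the downloads directory is my own assumption; flickr_walk and flickr_search are the functions defined above):

import os

path = os.path.join(os.getcwd(), 'downloads')  # directory used by flickr_walk above
if not os.path.exists(path):
    os.makedirs(path)
flickr_walk('dog')    # download photos matching "dog" via flickr.walk
flickr_search('dog')  # or dump the raw Photos.search response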
I don't have any clue about the why/how, but if you want to use the flickr_api module with Python 3.5+, you need to fix the imports, like I did below:
try:
    from objects import *
    import objects
    import upload as Upload
    from upload import upload, replace
except Exception as e:
    # print "Could not load all modules"
    print(type(e), e)

from .auth import set_auth_handler
from .method_call import enable_cache, disable_cache
from .keys import set_keys
from ._version import __version__
After this edit, it fails with another Import Error on:
>>> import flickr_api
<class 'SyntaxError'> invalid syntax (method_call.py, line 50)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/krysopath/.local/lib/python3.5/site-packages/flickr_api/__init__.py", line 32, in <module>
from .auth import set_auth_handler
File "/home/krysopath/.local/lib/python3.5/site-packages/flickr_api/auth.py", line 43, in <module>
import urlparse
ImportError: No module named 'urlparse'
So you can fix this yourself, if you want to, by just walking along the ImportErrors and adding a dot to convert them into relative imports, which don't fail.
I guess if you want to use this module you have to fix it first, with unknown results. So if you haven't already invested heavily in it, it might be more effective to use the other module (flickrapi).

TypeError: 'NoneType' object has no attribute '__getitem__' in python code which uses google API

I have this code which searches for a word in Google using the Google API. It works fine once, but if I add many words or run it many times, I keep getting the following error...
results = jsonResponse['responseData']['results']
TypeError: 'NoneType' object has no attribute '__getitem__'
I tried searching a lot on Google but couldn't work out what the issue is. Can anyone please help me understand the issue and how to handle it? I have been struggling with this error.
import urllib
import urllib2
from urllib import urlencode
from urllib2 import urlopen
import json as m_json
import json
import simplejson
import re
import sys
import pprint
from nltk.corpus import stopwords

words = ['headache', 'diabetes', 'myopia', 'dhaed', 'snow', 'blindness', 'head', 'ache', 'acne', 'aids', 'blindness', 'head', 'ache', 'acne', 'aids', 'blindness', 'head', 'ache', 'acne', 'aids']
for word in words:
    url = ('https://ajax.googleapis.com/ajax/services/search/web'
           '?v=1.0&q=' + word + '&userip=192.168.1.105')
    request = urllib2.Request(url)
    response = urllib2.urlopen(request)
    jsonResponse = json.loads(response.read())
    #print "the response now is: ", jsonResponse
    #pprint.pprint(jsonResponse)
    results = jsonResponse['responseData']['results']
    for result in results:
        print "\nthe result is: ", result
        url = result['url']
        print "\nthe url is: ", url
        try:
            page = urllib2.urlopen(url).read()
        except urllib2.HTTPError, err:
            if err.code == 403:
                print "bad"
                continue
            else:
                print "good"
                break
        except urllib2.HTTPError:
            print "server error"
        except:
            print "dont know the error"
Thanks in advance.
Chances are that when there are no results, jsonResponse['responseData'] is None (i.e. JSON null), so it has no 'results' entry; or the lookup fails one level up because jsonResponse has no usable 'responseData'.
Dump the output when that error happens to see which one is None, and then add a check for it before the line results = jsonResponse['responseData']['results'].
Aneroid is correct about the response data.
One possible solution to handle this:
responseData = jsonResponse['responseData']
if responseData is not None:
    results = responseData['results']
    for result in results:
        pass  # your code
else:
    print "No Response"

How do I search for text in a page using regular expressions in Python?

I'm trying to create a simple module for phenny, a simple IRC bot framework in Python. The module is supposed to go to http://www.isup.me/websitetheuserrequested to check if a website is up or down. I assumed I could use a regex for the module, seeing as other built-in modules use one too, so I tried creating this simple script, although I don't think I did it right.
import re, urllib
import web

isupuri = 'http://www.isup.me/%s'
check = re.compile(r'(?ims)<span class="body">.*?</span>')

def isup(phenny, input):
    global isupuri
    global cleanup
    bytes = web.get(isupuri)
    quote = check.findall(bytes)
    result = re.sub(r'<[^>]*?>', '', str(quote[0]))
    phenny.say(result)
isup.commands = ['isup']
isup.priority = 'low'
isup.example = '.isup google.com'
It imports the required web packages (I think), and defines the string and the text to look for within the page. I really don't know what I did in those four lines, I kinda just ripped the code off another phenny module.
Here is an example of a quotes module that grabs a random quote from some webpage, I kinda tried to use that as a base: http://pastebin.com/vs5ypHZy
Does anyone know what I am doing wrong? If something needs clarifying, I can tell you; I don't think I explained this enough.
Here is the error I get:
Traceback (most recent call last):
File "C:\phenny\bot.py", line 189, in call
try: func(phenny, input)
File "C:\phenny\modules\isup.py", line 18, in isup
result = re.sub(r'<[^>]*?>', '', str(quote[0]))
IndexError: list index out of range
try this (from http://docs.python.org/release/2.6.7/library/httplib.html#examples):
import httplib

conn = httplib.HTTPConnection("www.python.org")
conn.request("HEAD", "/index.html")
res = conn.getresponse()
if res.status >= 200 and res.status < 300:
    print "up"
else:
    print "down"
You will also need to add code to follow redirects before checking the response status.
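As a rough sketch of that redirect handling, still with httplib (an illustration only: it assumes plain HTTP and absolute Location headers, and gives up after a few hops):

import httplib
import urlparse

def head_status(url, max_redirects=5):
    # issue HEAD requests, following up to max_redirects 3xx responses
    for _ in range(max_redirects):
        parts = urlparse.urlparse(url)
        conn = httplib.HTTPConnection(parts.netloc)
        conn.request("HEAD", parts.path or "/")
        res = conn.getresponse()
        if res.status in (301, 302, 303, 307) and res.getheader("location"):
            url = res.getheader("location")
            continue
        return res.status
    return res.status

print "up" if 200 <= head_status("http://www.python.org/index.html") < 300 else "down"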
edit
Alternative that does not need to handle redirects but uses exceptions for logic:
import urllib2

request = urllib2.Request('http://google.com')
request.get_method = lambda: 'HEAD'
try:
    response = urllib2.urlopen(request)
    print "up"
    print response.code
except urllib2.URLError, e:
    # failure
    print "down"
    print e
You should do your own tests and choose the best one.
The error means your regexp wasn't found anywhere on the page (the list quote has no element 0).
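Looking at the code, one likely reason the regexp finds nothing is that isupuri still contains the literal %s: it is never formatted with the site the user asked about. A sketch of the fix inside isup(), assuming phenny passes the command argument as input.group(2) (as other phenny modules do), which also guards against an empty match:

def isup(phenny, input):
    site = input.group(2)              # assumption: the text after the ".isup" command
    bytes = web.get(isupuri % site)    # fill in the %s placeholder before fetching
    quote = check.findall(bytes)
    if not quote:
        phenny.say("Couldn't parse the isup.me response.")
        return
    result = re.sub(r'<[^>]*?>', '', quote[0])
    phenny.say(result)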
