I'm on Python 2.7 and have already installed the requirements.txt from the python-twitter Building section.
This is my first time jumping in; while following the basic steps described in the python-twitter Documentation section to make sure everything is okay, I'm getting an error.
Here's what I typed at the Python command-line shell:
>>> api = twitter.Api(consumer_key='consumer_key',
...                   consumer_secret='consumer_secret',
...                   access_token_key='access_token',
...                   access_token_secret='access_token_secret')
The error:
>>> print api.VerifyCredentials()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/.virtualenvs/twitter/local/lib/python2.7/site- packages/python_twitter-1.2-py2.7.egg/twitter.py", line 5209, in VerifyCredentials
data = self._ParseAndCheckTwitter(json.content)
File "/home/ubuntu/.virtualenvs/twitter/local/lib/python2.7/site-packages/python_twitter-1.2-py2.7.egg/twitter.py", line 5462, in _ParseAndCheckTwitter
self._CheckForTwitterError(data)
File "/home/ubuntu/.virtualenvs/twitter/local/lib/python2.7/site-packages/python_twitter-1.2-py2.7.egg/twitter.py", line 5487, in _CheckForTwitterError
raise TwitterError(data['errors'])
twitter.TwitterError: [{u'message': u'Invalid or expired token', u'code': 89}]
What could I be missing here?
You need to replace 'consumer_key', 'consumer_secret', 'access_token', and 'access_token_secret' with their actual values. You can either put those values directly in the twitter.Api() call, or assign their values to variables:
>>> # all these values are just random, you'll need to use your own values
>>> c_key = '123456'
>>> c_secret = 'a88d098cd76'
>>> token = '98765'
>>> token_secret = 'ad98c63e87f00'
>>> api = twitter.Api(consumer_key=c_key,
...                   consumer_secret=c_secret,
...                   access_token_key=token,
...                   access_token_secret=token_secret)
I'm not sure, but it looks like you need a valid consumer_key, consumer_secret, access_token_key, and access_token_secret. This will involve setting up your own app on twitter.com and using the consumer key/secret and access token key/secret it gives you to get started.
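Once you have real values from your own app, a quick sanity check (a minimal sketch; the placeholder strings below are assumptions you would replace with your actual keys) is to print the authenticated account:

import twitter

# placeholder credentials -- substitute the values from your own Twitter app
api = twitter.Api(consumer_key='YOUR_CONSUMER_KEY',
                  consumer_secret='YOUR_CONSUMER_SECRET',
                  access_token_key='YOUR_ACCESS_TOKEN',
                  access_token_secret='YOUR_ACCESS_TOKEN_SECRET')

user = api.VerifyCredentials()
print user.screen_name  # should print your handle instead of raising TwitterError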
I'm making a python script using the module PlexAPI. My goal is to start a stream on the client Chrome. The movie to view has the key /library/metadata/1.
Documentation and sources of code:
playMedia documentation
Example 4 is used but changed to fit my requirements
I'm using the key parameter
from plexapi.server import PlexServer
baseurl = 'http://xxx.xxx.xxx.xxx:xxxxx'
token = 'xxxxx'
plex = PlexServer(baseurl, token)
client = plex.client("Chrome")
client.playMedia(key="/library/metadata/1")
This gives the following error:
Traceback (most recent call last):
File "start_stream.py", line 7, in <module>
client.playMedia(key="/library/metadata/1")
TypeError: playMedia() missing 1 required positional argument: 'media'
So I edited the script:
client.playMedia(key="/library/metadata/1")
#changed to
client.playMedia(key="/library/metadata/1", media="movie")
But then I get a different error:
Traceback (most recent call last):
File "start_stream.py", line 7, in <module>
client.playMedia(key="/library/metadata/1", media="movie")
File "/usr/local/lib/python3.8/dist-packages/plexapi/client.py", line 497, in playMedia
server_url = media._server._baseurl.split(':')
AttributeError: 'str' object has no attribute '_server'
I don't really understand what's going on. Can someone help?
I was able to make it work the following way. playMedia() expects an actual media object as its first argument rather than a key string, so fetch the item from the server first and pass that in:
from plexapi.server import PlexServer
baseurl = 'http://xxx.xxx.xxx.xxx:xxxxx'
token = 'xxxxx'
plex = PlexServer(baseurl, token)
key = '/library/metadata/1'
client = plex.client("Chrome")
media = plex.fetchItem(key)
client.playMedia(media)
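If you would rather not hard-code the metadata key, the same item can usually be looked up through the library instead. A sketch, where the section name 'Movies' and the title are assumptions you would replace with your own:

from plexapi.server import PlexServer

baseurl = 'http://xxx.xxx.xxx.xxx:xxxxx'
token = 'xxxxx'
plex = PlexServer(baseurl, token)

client = plex.client("Chrome")
# 'Movies' and the title below are assumed placeholders
movie = plex.library.section('Movies').get('Some Movie Title')
client.playMedia(movie)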
Hi there. I'm building a simple scraping tool. Here's the code that I have for it:
from bs4 import BeautifulSoup
import requests
from lxml import html
import gspread
from oauth2client.service_account import ServiceAccountCredentials
import datetime
scope = ['https://spreadsheets.google.com/feeds']
credentials = ServiceAccountCredentials.from_json_keyfile_name('Programming 4 Marketers-File-goes-here.json', scope)
site = 'http://nathanbarry.com/authority/'
hdr = {'User-Agent':'Mozilla/5.0'}
req = requests.get(site, headers=hdr)
soup = BeautifulSoup(req.content)
def getFullPrice(soup):
    divs = soup.find_all('div', id='complete-package')
    price = ""
    for i in divs:
        price = i.a
    completePrice = (str(price).split('$',1)[1]).split('<', 1)[0]
    return completePrice

def getVideoPrice(soup):
    divs = soup.find_all('div', id='video-package')
    price = ""
    for i in divs:
        price = i.a
    videoPrice = (str(price).split('$',1)[1]).split('<', 1)[0]
    return videoPrice
fullPrice = getFullPrice(soup)
videoPrice = getVideoPrice(soup)
date = datetime.date.today()
gc = gspread.authorize(credentials)
wks = gc.open("Authority Tracking").sheet1
row = len(wks.col_values(1))+1
wks.update_cell(row, 1, date)
wks.update_cell(row, 2, fullPrice)
wks.update_cell(row, 3, videoPrice)
This script runs on my local machine. But, when I deploy it as a part of an app to Heroku and try to run it, I get the following error:
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.6/site-packages/gspread/client.py", line 219, in put_feed
r = self.session.put(url, data, headers=headers)
File "/app/.heroku/python/lib/python3.6/site-packages/gspread/httpsession.py", line 82, in put
return self.request('PUT', url, params=params, data=data, **kwargs)
File "/app/.heroku/python/lib/python3.6/site-packages/gspread/httpsession.py", line 69, in request
response.status_code, response.content))
gspread.exceptions.RequestError: (400, "400: b'Invalid query parameter value for cell_id.'")
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "AuthorityScraper.py", line 44, in
wks.update_cell(row, 1, date)
File "/app/.heroku/python/lib/python3.6/site-packages/gspread/models.py", line 517, in update_cell
self.client.put_feed(uri, ElementTree.tostring(feed))
File "/app/.heroku/python/lib/python3.6/site-packages/gspread/client.py", line 221, in put_feed
if ex[0] == 403:
TypeError: 'RequestError' object does not support indexing
What do you think might be causing this error? Do you have any suggestions for how I can fix it?
There are a couple of things going on:
1) The Google Sheets API returned an error: "Invalid query parameter value for cell_id":
gspread.exceptions.RequestError: (400, "400: b'Invalid query parameter value for cell_id.'")
2) A bug in gspread caused an exception upon receipt of the error:
TypeError: 'RequestError' object does not support indexing
Python 3 removed __getitem__ from BaseException, which this gspread error handling relies on. This doesn't matter too much, because it would have raised an UpdateCellError exception anyway.
My guess is that you are passing an invalid row number to update_cell. It would be helpful to add some debug logging to your script to show, for example, which row it is trying to update.
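Something like the following, placed just before the failing call, would show that (a minimal sketch reusing the variable names from your script):

import logging
logging.basicConfig(level=logging.DEBUG)

row = len(wks.col_values(1)) + 1
logging.debug("col_values(1) returned %d values, so updating row %d", row - 1, row)
wks.update_cell(row, 1, date)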
It may be better to start with a worksheet with zero rows and use append_row instead. However there does seem to be an outstanding issue in gspread with append_row, and it may actually be the same issue you are running into.
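If you do go that route, the three update_cell calls collapse into a single append. A sketch, assuming the append_row issue mentioned above doesn't affect your gspread version:

# append_row adds a new row after the last non-empty row,
# so no manual row counting is needed
wks.append_row([str(date), fullPrice, videoPrice])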
I encountered the same problem. BS4 works fine on my local machine; however, for some reason it is far too slow on the Heroku server, which results in the error.
I switched to lxml and it is working fine now.
Install it by command:
pip install lxml
A sample code snippet is given below:
from lxml import html
import requests
getpage = requests.get("https://url_here")
gethtmlcontent = html.fromstring(getpage.content)
data = gethtmlcontent.xpath('//div[@class = "class-name"]/text()')
# this is a sample for fetching data from the target div
data = data[0:n]  # as per your requirement (n = however many items you need)
# now inject the data into the Django template.
I am trying to use the Google Drive API to download publicly available files, but whenever I try to proceed I get an error.
For reference, I have successfully set up OAuth2 so that I have a client ID, a client secret, and a redirect URL; however, when I try to run the flow I get an error saying the object has no attribute 'urlencode'.
>>> from apiclient.discovery import build
>>> from oauth2client.client import OAuth2WebServerFlow
>>> flow = OAuth2WebServerFlow(client_id='not_showing_client_id', client_secret='not_showing_secret_id', scope='https://www.googleapis.com/auth/drive', redirect_uri='https://www.example.com/oauth2callback')
>>> auth_uri = flow.step1_get_authorize_url()
>>> code = '4/E4h7XYQXXbVNMfOqA5QzF-7gGMagHSWm__KIH6GSSU4#'
>>> credentials = flow.step2_exchange(code)
And then I get the error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/2.7/site-packages/oauth2client/util.py", line
137, in positional_wrapper
return wrapped(*args, **kwargs)
File "/Library/Python/2.7/site-packages/oauth2client/client.py", line
1980, in step2_exchange
body = urllib.parse.urlencode(post_data)
AttributeError: 'Module_six_moves_urllib_parse' object has no attribute
'urlencode'
Any help would be appreciated. Also, would someone mind enlightening me as to how I instantiate a drive_file? According to https://developers.google.com/drive/web/manage-downloads I need to instantiate one, and I am unsure of how to do so.
Edit: So I figured out why I was getting the error above. If anyone else is having the same problem, try running:
sudo pip install -I google-api-python-client==1.3.2
However, I am still unclear about the drive_file instance, so any help with that would be appreciated.
Edit 2: Okay, so I figured out the answer to my whole question. The drive_file instance is just the metadata that the API returns when you look up a file by its ID.
So, as I said in my edits: try the sudo pip install above, and a file instance is just a dictionary of metadata.
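For anyone who lands here looking for a concrete example, here is a minimal sketch of what that looks like with the credentials obtained from the flow above. It assumes the Drive v2 API that the manage-downloads page describes, and the file ID is a placeholder:

import httplib2
from apiclient.discovery import build

# authorize an HTTP object with the credentials from flow.step2_exchange(code)
http = credentials.authorize(httplib2.Http())
drive_service = build('drive', 'v2', http=http)

# drive_file is just a dictionary of metadata for the file with this (placeholder) ID
drive_file = drive_service.files().get(fileId='PLACEHOLDER_FILE_ID').execute()
print drive_file.get('title')
print drive_file.get('downloadUrl')  # present for binary (non-Google-Docs) files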
I want to open a connection to an LDAP directory using an LDAP URL that will be given at run time. For example:
ldap://192.168.2.151/dc=directory,dc=example,dc=com
It is valid as far as I can tell; python-ldap's URL parser ldapurl.LDAPUrl accepts it.
url = 'ldap://192.168.2.151/dc=directory,dc=example,dc=com'
parsed_url = ldapurl.LDAPUrl(url)
parsed_url.dn
'dc=directory,dc=example,dc=com'
But if I use it to initialize an LDAPObject, I get an ldap.LDAPError exception:
ldap.initialize(url)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/ldap/functions.py", line 91, in initialize
return LDAPObject(uri,trace_level,trace_file,trace_stack_limit)
File "/usr/lib/python2.7/dist-packages/ldap/ldapobject.py", line 70, in __init__
self._l = ldap.functions._ldap_function_call(ldap._ldap_module_lock,_ldap.initialize,uri)
File "/usr/lib/python2.7/dist-packages/ldap/functions.py", line 63, in _ldap_function_call
result = func(*args,**kwargs)
ldap.LDAPError: (0, 'Error')
I found that if I manually encode the DN part of the URL, it works:
url = 'ldap://192.168.2.151/dc=directory%2cdc=example%2cdc=com'
#url still valid
parsed_url = ldapurl.LDAPUrl(url)
parsed_url.dn
'dc=directory,dc=example,dc=com'
#and will return a valid connection
ldap.initialize(url)
<ldap.ldapobject.SimpleLDAPObject instance at 0x1400098>
How can I ensure robust URL handling in ldap.initialize without encoding parts of the URL myself (which, I'm afraid, won't be that robust anyway)?
You can programmatically encode the last part of the URL:
from urllib import quote # works in Python 2.x
from urllib.parse import quote # works in Python 3.x
url = 'ldap://192.168.2.151/dc=directory,dc=example,dc=com'
idx = url.rindex('/') + 1
url[:idx] + quote(url[idx:], '=')
=> 'ldap://192.168.2.151/dc=directory%2Cdc=example%2Cdc=com'
One can use the LDAPUrl.unparse() method to get a properly encoded version of the URI, like this:
>>> import ldapurl
>>> url = ldapurl.LDAPUrl('ldap://192.168.2.151/dc=directory,dc=example,dc=com')
>>> url.unparse()
'ldap://192.168.2.151/dc%3Ddirectory%2Cdc%3Dexample%2Cdc%3Dcom???'
>>> ldap.initialize(url.unparse())
<ldap.ldapobject.SimpleLDAPObject instance at 0x103d998>
And LDAPUrl.unparse() will not re-encode an already encoded URL:
>>> url = ldapurl.LDAPUrl('ldap://example.com/dc%3Dusers%2Cdc%3Dexample%2Cdc%3Dcom%2F???')
>>> url.unparse()
'ldap://example.com/dc%3Dusers%2Cdc%3Dexample%2Cdc%3Dcom%2F???'
So you can use it blindly on any LDAP URI your program must handle.
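Putting that together, a small helper built on the calls shown above (just a sketch) can normalize whatever URL is supplied at run time before handing it to ldap.initialize():

import ldap
import ldapurl

def connect(raw_url):
    # LDAPUrl accepts both encoded and unencoded URLs, and unparse()
    # always returns a properly percent-encoded URI
    safe_url = ldapurl.LDAPUrl(raw_url).unparse()
    return ldap.initialize(safe_url)

conn = connect('ldap://192.168.2.151/dc=directory,dc=example,dc=com')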
Hey guys, I am a little lost on how to get the auth token. Here is the code I am using on the return from authorizing my app:
client = gdata.service.GDataService()
gdata.alt.appengine.run_on_appengine(client)
sessionToken = gdata.auth.extract_auth_sub_token_from_url(self.request.uri)
client.UpgradeToSessionToken(sessionToken)
logging.info(client.GetAuthSubToken())
What gets logged is "None", so that doesn't seem right :-(
if I use this:
temp = client.upgrade_to_session_token(sessionToken)
logging.info(dump(temp))
I get this:
{'scopes': ['http://www.google.com/calendar/feeds/'], 'auth_header': 'AuthSub token=CNKe7drpFRDzp8uVARjD-s-wAg'}
So I can see that I am getting an AuthSub token, and I guess I could just parse that and grab the token, but that doesn't seem like the way things should work.
If I try to use AuthSubTokenInfo I get this:
Traceback (most recent call last):
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/webapp/__init__.py", line 507, in __call__
handler.get(*groups)
File "controllers/indexController.py", line 47, in get
logging.info(client.AuthSubTokenInfo())
File "/Users/matthusby/Dropbox/appengine/projects/FBCal/gdata/service.py", line 938, in AuthSubTokenInfo
token = self.token_store.find_token(scopes[0])
TypeError: 'NoneType' object is unsubscriptable
So it looks like my token_store is not getting filled in correctly. Is that something I should be doing myself?
Also, I am using gdata 2.0.9.
Thanks
Matt
To answer my own question:
When you get the token, just call:
client.token_store.add_token(sessionToken)
and App Engine will store it in a new entity type for you. Then, when making calls to the calendar service, just don't set the authsubtoken; it will take care of that for you automatically.
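For completeness, here is a sketch of the whole handler, mirroring the code from the question with that one extra line added (variable names are the ones used above):

client = gdata.service.GDataService()
gdata.alt.appengine.run_on_appengine(client)

# extract the single-use token from the redirect URL, upgrade it,
# and persist it in the App Engine-backed token store
sessionToken = gdata.auth.extract_auth_sub_token_from_url(self.request.uri)
client.UpgradeToSessionToken(sessionToken)
client.token_store.add_token(sessionToken)

# later calls made through this client pick up the stored token automatically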