I keep getting this exception from TweetStream 1.1.1, "exception.code == 404: AuthenticationError("Access denied")". It worked last week and now it doesn't. I have tried different usernames and passwords. I can log into Twitter with my account information. I even deleted and reinstalled the module. What gives? Thanks for the help!
I try running this...
import tweetstream
stream = tweetstream.SampleStream("MY_USERNAME", "MY_PASSWORD")
for tweet in stream:
    print tweet
The error actually looks like this:
Traceback (most recent call last):
File "<pyshell#28>", line 1, in <module>
for tweet in stream:
File "C:\Python27\lib\site-packages\tweetstream-1.1.1-py2.7.egg\tweetstream\streamclasses.py", line 165, in __iter__
self._init_conn()
File "C:\Python27\lib\site-packages\tweetstream-1.1.1-py2.7.egg\tweetstream\streamclasses.py", line 103, in _init_conn
raise AuthenticationError("Access denied")
AuthenticationError: Access denied
Twitter released the next version of its API (1.1), and tweetstream doesn't support it yet. See the relevant issue on the tweetstream project issue tracker.
Had the same problem here, and I could not get the patched version mentioned on the project issue tracker (linked by @alecxe) to work either.
Twitter provides a list of libraries that should work with the newer API, at https://dev.twitter.com/docs/twitter-libraries
It lists many, including several for Python.
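For example, here is a minimal streaming sketch using tweepy, one of the Python libraries on that list. This assumes tweepy 3.x (newer releases changed the streaming classes), and the four credential strings are placeholders you get by registering an app on dev.twitter.com:
import tweepy

# Placeholder credentials from your registered app on dev.twitter.com
auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')
auth.set_access_token('ACCESS_TOKEN', 'ACCESS_TOKEN_SECRET')

class PrintListener(tweepy.StreamListener):
    def on_status(self, status):
        # Print the text of each tweet from the sample stream
        print(status.text)

stream = tweepy.Stream(auth=auth, listener=PrintListener())
stream.sample()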
I'm subscribed to Skillshare, but it turns out Skillshare's UI is a huge mess and unproductive for learning. So I am looking for a way to download a whole course (a single course) at once.
I found this GitHub repo:
https://github.com/crazygroot/skillsharedownloader
It also has a Google Colab link at the bottom:
https://colab.research.google.com/drive/1hUUPDDql0QLul7lB8NQNaEEq-bbayEdE#scrollTo=xunEYHutBEv%2F
I'm getting the below error:
Traceback (most recent call last):
File "/root/Skillsharedownloader/ss.py", line 11, in <module>
dl.download_course_by_url(course_url)
File "/root/Skillsharedownloader/downloader.py", line 34, in download_course_by_url
raise Exception('Failed to parse class ID from URL')
Exception: Failed to parse class ID from URL
This is the course link that I'm using:
https://www.skillshare.com/en/classes/React-JS-Learn-by-examples/1186943986/
I have encountered a similar issue. The problem was that the downloader requires the URL to match https://www.skillshare.com/classes/.*?/(\d+).
If you're copying the URL from the address bar, check it again and make sure it has the same format. The current one looks like https://www.skillshare.com/en/classes/xxxxx, so simply remove /en.
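To illustrate, a quick sketch (the exact regex in downloader.py may differ slightly, but the reported pattern is the one above):
import re

# Reported pattern for pulling the class ID out of a Skillshare URL
class_id_re = re.compile(r'https://www\.skillshare\.com/classes/.*?/(\d+)')

url_with_locale = 'https://www.skillshare.com/en/classes/React-JS-Learn-by-examples/1186943986/'
url_fixed = url_with_locale.replace('/en/', '/', 1)

print(class_id_re.match(url_with_locale))     # None -> "Failed to parse class ID from URL"
print(class_id_re.match(url_fixed).group(1))  # 1186943986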
I'm trying to upload a video to YouTube using a Python script.
The code given here (upload_video.py) is supposed to work, and I've followed the setup, which includes enabling the YouTube API and getting OAuth secret keys and whatnot. You may notice that the code is in Python 2, so I used 2to3 to make it run with Python 3.7. The issue is that, for some reason, I'm asked to log in when I execute upload_video.py:
Now this should not be occurring, as that's the whole point of having a client_secrets.json file: you don't need to log in explicitly. So once I exit this in-shell browser, here's what I see:
Here's the first line:
/usr/lib/python3.7/site-packages/oauth2client/_helpers.py:255: UserWarning: Cannot access upload_video.py-oauth2.json: No such file or directory
warnings.warn(_MISSING_FILE_MESSAGE.format(filename))
Now I don't understand why upload_video.py-oauth2.json is needed, since in the upload_video.py file the OAuth2 secrets file is set as "client_secrets.json".
Anyway, I created the file upload_video.py-oauth2.json and copied the contents of client_secrets.json into it. I didn't get the weird login prompt then, but I got another error:
Traceback (most recent call last):
File "upload_video.py", line 177, in <module>
youtube = get_authenticated_service(args)
File "upload_video.py", line 80, in get_authenticated_service
credentials = storage.get()
File "/usr/lib/python3.7/site-packages/oauth2client/client.py", line 407, in get
return self.locked_get()
File "/usr/lib/python3.7/site-packages/oauth2client/file.py", line 54, in locked_get
credentials = client.Credentials.new_from_json(content)
File "/usr/lib/python3.7/site-packages/oauth2client/client.py", line 302, in new_from_json
module_name = data['_module']
KeyError: '_module'
So basically now I've hit a dead end. Any ideas about what to do now?
See the code of the get_authenticated_service function in upload_video.py: you should not create the file upload_video.py-oauth2.json yourself! This file is created upon completion of the OAuth2 flow, via the call to run_flow within get_authenticated_service.
You may also want to read the OAuth 2.0 for Mobile & Desktop Apps documentation for thorough information about the authorization flow on standalone computers.
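For reference, the relevant part of get_authenticated_service looks roughly like the sketch below. This is paraphrased from the Google sample (which uses oauth2client), so details may differ in your copy; the point is that Storage names the upload_video.py-oauth2.json file and run_flow is what writes it:
import httplib2
from apiclient.discovery import build
from oauth2client.client import flow_from_clientsecrets
from oauth2client.file import Storage
from oauth2client.tools import run_flow

CLIENT_SECRETS_FILE = "client_secrets.json"
YOUTUBE_UPLOAD_SCOPE = "https://www.googleapis.com/auth/youtube.upload"

def get_authenticated_service(args):
    flow = flow_from_clientsecrets(CLIENT_SECRETS_FILE, scope=YOUTUBE_UPLOAD_SCOPE)
    # Cached credentials; this file is written by run_flow, not created by hand.
    storage = Storage("upload_video.py-oauth2.json")
    credentials = storage.get()
    if credentials is None or credentials.invalid:
        # Opens the consent screen once, then saves the credentials into storage.
        credentials = run_flow(flow, storage, args)
    return build("youtube", "v3", http=credentials.authorize(httplib2.Http()))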
I'm using the facebook_scraper library to get posts from a Facebook page with this code:
from facebook_scraper import get_posts
for post in get_posts('ThaiPBSFan', pages = 50):
    print(post['text'][:100])
It works for a few posts, then errors like this:
Traceback (most recent call last):
File ".\main.py", line 2, in <module>
for post in get_posts('ThaiPBSFan', pages = 50):
File "C:\Users\admin\AppData\Local\Programs\Python\Python37-32\lib\site-packages\facebook_scraper.py", line 75, in _get_posts
yield _extract_post(article)
File "C:\Users\admin\AppData\Local\Programs\Python\Python37-32\lib\site-packages\facebook_scraper.py", line 102, in _extract_post
text, post_text, shared_text = _extract_text(article)
File "C:\Users\admin\AppData\Local\Programs\Python\Python37-32\lib\site-packages\facebook_scraper.py", line 137, in _extract_text
nodes = article.find('p, header')
AttributeError: 'NoneType' object has no attribute 'find'
So what's the problem, and how can I fix it?
From the traceback, it seems that facebook_scraper is not returning a valid post; this may be because there are no further posts to find on the page.
Therefore, you could use a try/except block to catch this exception, i.e.:
from facebook_scraper import get_posts

try:
    for post in get_posts('ThaiPBSFan', pages=50):
        print(post['text'][:100])
except AttributeError:
    print("No more posts to get")
It's not ideal, as you would preferably get a more specific exception once there were no more posts to retrieve, but it should work in your case. Be careful with the code inside your try clause: if an AttributeError is raised anywhere else, you will miss it.
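If you want to keep the try clause narrow, one option (a sketch of an alternative, not part of the original answer) is to guard only the call into the scraper:
from facebook_scraper import get_posts

posts = get_posts('ThaiPBSFan', pages=50)
while True:
    try:
        post = next(posts)  # only the scraper call is inside the try
    except (StopIteration, AttributeError):
        print("No more posts to get")
        break
    print(post['text'][:100])  # an AttributeError raised here is no longer swallowed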
I had the same issue, but only with the most recent version of the package (0.1.12). Try an older version of the package. For example, I tried version 0.1.4 and it worked well. To install it, write:
pip install facebook_scraper==0.1.4
in your terminal.
I am trying to use the Google Drive API to download publicly available files; however, whenever I try to proceed I get an error.
For reference, I have successfully set up OAuth2 so that I have a client ID, a client secret, and a redirect URL; however, when I try setting it up I get an error saying the object has no attribute urlencode.
>>> from apiclient.discovery import build
>>> from oauth2client.client import OAuth2WebServerFlow
>>> flow = OAuth2WebServerFlow(client_id='not_showing_client_id', client_secret='not_showing_secret_id', scope='https://www.googleapis.com/auth/drive', redirect_uri='https://www.example.com/oauth2callback')
>>> auth_uri = flow.step1_get_authorize_url()
>>> code = '4/E4h7XYQXXbVNMfOqA5QzF-7gGMagHSWm__KIH6GSSU4#'
>>> credentials = flow.step2_exchange(code)
And then I get the error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/2.7/site-packages/oauth2client/util.py", line
137, in positional_wrapper
return wrapped(*args, **kwargs)
File "/Library/Python/2.7/site-packages/oauth2client/client.py", line
1980, in step2_exchange
body = urllib.parse.urlencode(post_data)
AttributeError: 'Module_six_moves_urllib_parse' object has no attribute
'urlencode'
Any help would be appreciated. Also, would someone mind enlightening me as to how to instantiate a drive_file? According to https://developers.google.com/drive/web/manage-downloads, I need to instantiate one, and I am unsure how to do so.
Edit: So I figured out why I was getting the error I got before. If anyone else is having the same problem, try running:
sudo pip install -I google-api-python-client==1.3.2
However I am still unclear about the drive instance so any help with that would be appreciated.
Edit 2: Okay, so I figured out the answer to my whole question. The drive instance is just the metadata that results when we use the API to look up a file by its ID.
So, as I said in my edits, try the sudo pip install, and a file instance is just a dictionary of metadata.
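To illustrate, here is a minimal sketch of fetching that metadata dictionary with Drive API v2 (the version in use here); the file ID is a placeholder, and credentials is the object returned by flow.step2_exchange(code) above:
import httplib2
from apiclient.discovery import build

# `credentials` is the object obtained from flow.step2_exchange(code) earlier
http = credentials.authorize(httplib2.Http())
service = build('drive', 'v2', http=http)

file_id = 'YOUR_FILE_ID'  # placeholder: the ID of the publicly available file
drive_file = service.files().get(fileId=file_id).execute()  # just a dict of metadata
print(drive_file['title'])
print(drive_file.get('downloadUrl'))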
I am trying to develop a gtk3 desktop application using python to perform the basic twitter functions like accessing the home timeline of a user, making tweets etc.
I am using the python-twitter library, but I am unable to find the API call for this purpose. I checked and saw there were a few patches, but they don't seem to work. The rest of the functions I am able to accomplish using the library.
I need help!!!
[edit]
This is the error I am facing when I tried using a fork of the python-twitter library, as given at: http://github.com/jaytaylor/python-twitter-api
Error:
>>> api.getUserTimeline('gaurav_sood91')
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "twitter.py", line 2646, in getUserTimeline
self._checkForTwitterError(data)
File "twitter.py", line 3861, in _checkForTwitterError
if data.has_key('next_cursor'):
AttributeError: 'list' object has no attribute 'has_key'
Using the python-twitter module from code.google.com, documentation here.
Accessing user timelines:
import twitter
api = twitter.Api()
statuses = api.GetUserTimeline('gaurav_sood91')
print [s.text for s in statuses]
Posting tweets:
import twitter
api = twitter.Api(consumer_key='consumer_key',
                  consumer_secret='consumer_secret',
                  access_token_key='access_token',
                  access_token_secret='access_token_secret')
status = api.PostUpdate('This is my update text.')
Edit for applying GetHomeTimeline patch:
Disclaimer: I'm on Windows, so you may need to change these steps a bit.
Download python-twitter
Extract to folder
Download 0002-Support-for-home-timeline.patch file from issue 152
Copy/move patch file to root of extracted python-twitter directory (there should be a file named twitter.py in this dir)
Run command: patch twitter.py 0002-Support-for-home-timeline.patch; you should get a message that the patch succeeded
In same directory, run command: python setup.py install
Run interactive python shell: import twitter, dir(twitter.Api)
You should see the GetHomeTimeline method listed.
Update for GetHomeTimeline:
Found a patch in issue 152 that works well, using OAuth and the JSON parse method that is now part of the Status class. Sample code:
import twitter
api = twitter.Api(consumer_key='consumer_key',
                  consumer_secret='consumer_secret',
                  access_token_key='access_token',
                  access_token_secret='access_token_secret')
statuses = api.GetHomeTimeline()
print [s.text for s in statuses]