python facebook-sdk fan_count issue - python

I cannot figure out how to get fan_count from a page. I always get this error:
Traceback (most recent call last):
File "./facebook_api.py", line 37, in <module>
facebook_graph.get_object('somepublicpage')['fan_count']
KeyError: 'fan_count'
The object only contains id/name, and I cannot figure out how to grant more permissions in order to get the 'fan_count' data.
Here is the code I'm using:
import facebook
import urllib
import urlparse
import subprocess
import warnings

warnings.filterwarnings('ignore', category=DeprecationWarning)

oauth_args = dict(client_id=FACEBOOK_APP_ID,
                  client_secret=FACEBOOK_APP_SECRET,
                  grant_type='client_credentials')
oauth_curl_cmd = ['curl',
                  'https://graph.facebook.com/oauth/access_token?' + urllib.urlencode(oauth_args)]
oauth_response = subprocess.Popen(oauth_curl_cmd,
                                  stdout=subprocess.PIPE,
                                  stderr=subprocess.PIPE).communicate()[0]

try:
    oauth_access_token = urlparse.parse_qs(str(oauth_response))['access_token'][0]
except KeyError:
    print('Unable to grab an access token!')
    exit()

print oauth_access_token

facebook_graph = facebook.GraphAPI(oauth_access_token)
print facebook_graph.get_object(PROFILE_ID)['fan_count']

Since v2.4 of the Graph API, you have to specify the fields you want to get returned. This would be the correct API call:
/{page-id}?fields=name,fan_count
It is called "Declarative Fields".
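With the facebook-sdk package, extra query parameters such as fields can be passed straight to get_object. A minimal sketch, assuming a valid access token (obtained as in the question) and the same public page:

import facebook

graph = facebook.GraphAPI(oauth_access_token)
# Request the fields explicitly, as required since Graph API v2.4
page = graph.get_object('somepublicpage', fields='name,fan_count')
print(page['fan_count'])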

Related

How to start a Plex stream using python-PlexAPI?

I'm making a python script using the module PlexAPI. My goal is to start a stream on the client Chrome. The movie to view has the key /library/metadata/1.
Documentation and sources of code:
playMedia documentation
Example 4 is used but changed to fit my requirements
I'm using the key parameter
from plexapi.server import PlexServer
baseurl = 'http://xxx.xxx.xxx.xxx:xxxxx'
token = 'xxxxx'
plex = PlexServer(baseurl, token)
client = plex.client("Chrome")
client.playMedia(key="/library/metadata/1")
This gives the following error:
Traceback (most recent call last):
File "start_stream.py", line 7, in <module>
client.playMedia(key="/library/metadata/1")
TypeError: playMedia() missing 1 required positional argument: 'media'
So I edited the call:
client.playMedia(key="/library/metadata/1")
#changed to
client.playMedia(key="/library/metadata/1", media="movie")
But then I get a different error:
Traceback (most recent call last):
File "start_stream.py", line 7, in <module>
client.playMedia(key="/library/metadata/1", media="movie")
File "/usr/local/lib/python3.8/dist-packages/plexapi/client.py", line 497, in playMedia
server_url = media._server._baseurl.split(':')
AttributeError: 'str' object has no attribute '_server'
I don't really understand what's going on. Can someone help?
I was able to make it work the following way: playMedia expects a media object rather than a key string (hence the AttributeError on 'str'), so fetch the item first and pass the object in:
from plexapi.server import PlexServer
baseurl = 'http://xxx.xxx.xxx.xxx:xxxxx'
token = 'xxxxx'
plex = PlexServer(baseurl, token)
key = '/library/metadata/1'
client = plex.client("Chrome")
media = plex.fetchItem(key)
client.playMedia(media)

Use rtkit to get content from tickets in Request Tracker

I'm trying to get some content from tickets via the REST API on Ubuntu 16.04, and I'm having trouble getting that content with the following code:
from rtkit.resource import RTResource
from rtkit.authenticators import QueryStringAuthenticator
from rtkit.errors import RTResourceError
from rtkit import set_logging
import logging
import re

set_logging('debug')
logger = logging.getLogger('rtkit')

resource = RTResource('http://ubuntu/rt/REST/1.0/', 'root', '**passwd**', QueryStringAuthenticator)

try:
    response = resource.get(path='ticket/2')
    myTicket = response.as_object()  # Returns an RtObj instance
except RTResourceError as e:
    logger.error(e.response.status_int)
    logger.error(e.response.status)
    logger.error(e.response.parsed)
And the terminal is giving this error:
File "LoginQuery.py", line 85, in <module>
myTicket = response.as_object() ## Returns an RtObj instance
AttributeError: 'RTResponse' object has no attribute 'as_object'
Has anyone else had this problem and knows how to solve it? Help :)
According to the package documentation, it seems the proper way to read the response is to use response.parsed:
try:
    response = resource.get(path='ticket/1')
    for r in response.parsed:
        for t in r:
            logger.info(t)
except RTResourceError as e:
    logger.error(e.response.status_int)
    logger.error(e.response.status)
    logger.error(e.response.parsed)
Yes, but I was trying to get the information from the contents separately... and some hours later I came up with this:
try:
    response = resource.get(path='ticket/2')
    Ticket = response.parsed
    Criation = Ticket[0][12][1]  # position of the creation date in the parsed response
except RTResourceError as e:
    logger.error(e.response.status)
This allows me to get the date when the ticket was created.
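If the positional indices feel fragile, here is a hedged sketch (assuming, as in the loop above, that response.parsed yields records made of (key, value) pairs) that converts each record to a dict and looks fields up by name; the field name 'Created' is an assumption, not taken from the original answer:

try:
    response = resource.get(path='ticket/2')
    for record in response.parsed:
        fields = dict(record)                # e.g. {'Created': '...', 'Subject': '...'}
        logger.info(fields.get('Created'))   # 'Created' is a hypothetical field name
except RTResourceError as e:
    logger.error(e.response.status)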

Get virtual machine created time on Azure using Python API

My requirement is to get all VMs in a subscription along with their launch (created) time. I didn't find the VM created time in the dashboard, but the Activity Log does show a timestamp. I would like to fetch all VMs created under one subscription ID along with their creation time.
(2FA is enabled for this account, so UserPassCredentials won't work.)
List of all VMs in a subscription id:
import os
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.compute import ComputeManagementClient

subscription_id = os.environ['AZURE_SUBSCRIPTION_ID']
credentials = ServicePrincipalCredentials(client_id=os.environ['AZURE_CLIENT_ID'],
                                          secret=os.environ['AZURE_CLIENT_SECRET'],
                                          tenant=os.environ['AZURE_TENANT_ID'])
compute_client = ComputeManagementClient(credentials, subscription_id)

for vm in compute_client.virtual_machines.list_all():
    print("\tVM: {}".format(vm.name))
Fetch created time from Activity log:
import os
import datetime
from pprint import pprint
from azure.monitor import MonitorClient
from azure.common.credentials import ServicePrincipalCredentials

today = datetime.datetime.now().date()
filter = " and ".join(["eventTimestamp le '{}T00:00:00Z'".format(today),
                       "resourceGroupName eq 'test-group'"])
subscription_id = 'xxxxx'
credentials = ServicePrincipalCredentials(client_id=os.environ['AZURE_CLIENT_ID'],
                                          secret=os.environ['AZURE_CLIENT_SECRET'],
                                          tenant=os.environ['AZURE_TENANT_ID'])
client = MonitorClient(credentials, subscription_id)
select = ",".join(["Administrative", "Write VirtualMachines"])

activity_logs = client.activity_logs.list(filter=filter, select=select)
for i in activity_logs:
    pprint(i.__dict__)
I'm able to get all the VMs (first sample program); however, while trying to fetch the Activity Log I get an error (second sample program).
Error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/2.7/site-packages/msrest/paging.py", line 109, in __next__
self.advance_page()
File "/Library/Python/2.7/site-packages/msrest/paging.py", line 95, in advance_page
self._response = self._get_next(self.next_link)
File "/Library/Python/2.7/site-packages/azure/monitor/operations/activity_logs_operations.py", line 117, in internal_paging
raise models.ErrorResponseException(self._deserialize, response)
azure.monitor.models.error_response.ErrorResponseException: Operation returned an invalid status code 'Bad Request'
Can somebody help me find the issue, please? Any help is really appreciated.
I tried to fetch the activity log of my resource group today using the code you provided, and I reproduced your issue.
My code:
import os
import datetime
from pprint import pprint
from azure.monitor import MonitorClient
from azure.common.credentials import ServicePrincipalCredentials

subscription_id = '***'
client_id = '***'
secret = '***'
tenant = '***'

today = datetime.datetime.now().date()
filter = " and ".join(["eventTimestamp le '{}T00:00:00Z'".format(today),
                       "resourceGroupName eq 'jay'"])
credentials = ServicePrincipalCredentials(client_id=client_id, secret=secret, tenant=tenant)
client = MonitorClient(credentials, subscription_id)
select = ",".join(["eventName", "operationName"])

print select
print filter

activity_logs = client.activity_logs.list(filter=filter, select=select)
for log in activity_logs:
    # assert isinstance(log, azure.monitor.models.EventData)
    print(" ".join([
        log.event_name.localized_value,
        log.operation_name.localized_value
    ]))
Running result:
eventName,operationName
eventTimestamp le '2017-10-17T00:00:00Z' and resourceGroupName eq 'jay'
Traceback (most recent call last):
File "E:/PythonWorkSpace/ActiveLog/FetchActiveLog.py", line 24, in <module>
for log in activity_logs:
File "E:\Python27\lib\site-packages\msrest\paging.py", line 109, in __next__
self.advance_page()
File "E:\Python27\lib\site-packages\msrest\paging.py", line 95, in advance_page
self._response = self._get_next(self.next_link)
File "E:\Python27\lib\site-packages\azure\monitor\operations\activity_logs_operations.py", line 117, in internal_paging
raise models.ErrorResponseException(self._deserialize, response)
azure.monitor.models.error_response.ErrorResponseException: Operation returned an invalid status code 'Bad Request'
After researching the Azure Monitor Python SDK, I found the difference:
filter = " and ".join(["eventTimestamp ge '{}T00:00:00Z'".format(today), "resourceGroupName eq 'jay'"])
Here it is ge, not le.
I modified the keyword and then the code worked well for me.
eventName,operationName
eventTimestamp ge '2017-10-17T00:00:00Z' and resourceGroupName eq 'jay'
End request Microsoft.Compute/virtualMachines/delete
End request Microsoft.Compute/virtualMachines/delete
End request Microsoft.Compute/virtualMachines/delete
Begin request Microsoft.Compute/virtualMachines/delete
End request Microsoft.Compute/virtualMachines/deallocate/action
End request Microsoft.Compute/virtualMachines/deallocate/action
Begin request Microsoft.Compute/virtualMachines/deallocate/action
End request Microsoft.Compute/virtualMachines/write
End request Microsoft.Compute/disks/write
End request Microsoft.Compute/virtualMachines/write
End request Microsoft.Network/networkSecurityGroups/write
End request Microsoft.Network/networkInterfaces/write
End request Microsoft.Network/publicIPAddresses/write
Hope it helps you.
Call the az CLI from Python.
Use the command below:
az vm list
This will list JSON data with fields that you can filter:
date = vm['timeCreated']
# "timeCreated": "2022-06-24T14:13:00.326985+00:00"
Based on the docs, it seems your date should be escaped. Moreover, it seems they take a datetime (and not a date):
https://learn.microsoft.com/en-us/rest/api/monitor/activitylogs
filter = " and ".join([
"eventTimestamp le '{}T00:00:00Z'".format(today),
"resourceGroupName eq 'test-group'"
])

Why does this python script work on my local machine but not on Heroku?

Hi there. I'm building a simple scraping tool. Here's the code that I have for it.
from bs4 import BeautifulSoup
import requests
from lxml import html
import gspread
from oauth2client.service_account import ServiceAccountCredentials
import datetime

scope = ['https://spreadsheets.google.com/feeds']
credentials = ServiceAccountCredentials.from_json_keyfile_name('Programming 4 Marketers-File-goes-here.json', scope)

site = 'http://nathanbarry.com/authority/'
hdr = {'User-Agent': 'Mozilla/5.0'}
req = requests.get(site, headers=hdr)
soup = BeautifulSoup(req.content)

def getFullPrice(soup):
    divs = soup.find_all('div', id='complete-package')
    price = ""
    for i in divs:
        price = i.a
    completePrice = (str(price).split('$', 1)[1]).split('<', 1)[0]
    return completePrice

def getVideoPrice(soup):
    divs = soup.find_all('div', id='video-package')
    price = ""
    for i in divs:
        price = i.a
    videoPrice = (str(price).split('$', 1)[1]).split('<', 1)[0]
    return videoPrice

fullPrice = getFullPrice(soup)
videoPrice = getVideoPrice(soup)
date = datetime.date.today()

gc = gspread.authorize(credentials)
wks = gc.open("Authority Tracking").sheet1
row = len(wks.col_values(1)) + 1
wks.update_cell(row, 1, date)
wks.update_cell(row, 2, fullPrice)
wks.update_cell(row, 3, videoPrice)
This script runs on my local machine. But, when I deploy it as a part of an app to Heroku and try to run it, I get the following error:
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.6/site-packages/gspread/client.py", line 219, in put_feed
r = self.session.put(url, data, headers=headers)
File "/app/.heroku/python/lib/python3.6/site-packages/gspread/httpsession.py", line 82, in put
return self.request('PUT', url, params=params, data=data, **kwargs)
File "/app/.heroku/python/lib/python3.6/site-packages/gspread/httpsession.py", line 69, in request
response.status_code, response.content))
gspread.exceptions.RequestError: (400, "400: b'Invalid query parameter value for cell_id.'")
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "AuthorityScraper.py", line 44, in
wks.update_cell(row, 1, date)
File "/app/.heroku/python/lib/python3.6/site-packages/gspread/models.py", line 517, in update_cell
self.client.put_feed(uri, ElementTree.tostring(feed))
File "/app/.heroku/python/lib/python3.6/site-packages/gspread/client.py", line 221, in put_feed
if ex[0] == 403:
TypeError: 'RequestError' object does not support indexing
What do you think might be causing this error? Do you have any suggestions for how I can fix it?
There are a couple of things going on:
1) The Google Sheets API returned an error: "Invalid query parameter value for cell_id":
gspread.exceptions.RequestError: (400, "400: b'Invalid query parameter value for cell_id.'")
2) A bug in gspread caused an exception upon receipt of the error:
TypeError: 'RequestError' object does not support indexing
Python 3 removed __getitem__ from BaseException, which this gspread error handling relies on. This doesn't matter too much, because it would have raised an UpdateCellError exception anyway.
My guess is that you are passing an invalid row number to update_cell. It would be helpful to add some debug logging to your script to show, for example, which row it is trying to update.
It may be better to start with a worksheet with zero rows and use append_row instead (see the sketch below). However, there does seem to be an outstanding issue in gspread with append_row, and it may actually be the same issue you are running into.
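A hedged sketch of the two suggestions above, reusing the wks, date, fullPrice and videoPrice variables from the question: log the computed row number for debugging, or let gspread pick the next row with append_row:

row = len(wks.col_values(1)) + 1
print("Updating row:", row)                 # debug: confirm the row number looks sane
wks.append_row([str(date), fullPrice, videoPrice])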
I encountered the same problem. BS4 works fine on my local machine; however, for some reason it is far too slow on the Heroku server, which results in the error.
I switched to lxml and it is working fine now.
Install it with:
pip install lxml
A sample code snippet is given below:
from lxml import html
import requests

getpage = requests.get("https://url_here")
gethtmlcontent = html.fromstring(getpage.content)
data = gethtmlcontent.xpath('//div[@class = "class-name"]/text()')
# this is a sample for fetching data from a dummy div
data = data[0:n]  # as per your requirement
# now inject the data into a Django template

Getting key error and Import error in Python code for YouTube data API

I am trying to run Python code to download YouTube data using an API key I generated for the YouTube Data API. My problem is that whenever I try to run the code I get warnings and an error. The code worked once when I downloaded it from Coursera, but after giving me the results once it stopped working.
The output of this code is a CSV file containing video data such as view count, like count, dislike count, comment count, favorite count, etc., which I would later use for statistical analysis in R or Python as part of my Coursera course.
Please find below the code I used; xxxxx is the API key I generated from the Google YouTube Data API v3.
# -*- coding: utf-8 -*-
from apiclient.discovery import build
#from apiclient.errors import HttpError
#from oauth2client.tools import argparser  # removed by Dongho
import argparse
import csv
import unidecode

# Set DEVELOPER_KEY to the API key value from the APIs & authentication -> Registered apps
# tab of https://cloud.google.com/console
# Please ensure that you have enabled the YouTube Data API for your project.
DEVELOPER_KEY = "xxxxxxxxxxxx"
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"

def youtube_search(options):
    youtube = build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION, developerKey=DEVELOPER_KEY)

    # Call the search.list method to retrieve results matching the specified
    # query term.
    search_response = youtube.search().list(q=options.q, part="id,snippet", maxResults=options.max_results).execute()

    videos = []
    channels = []
    playlists = []

    # Create a CSV output for the video list
    csvFile = open('video_result.csv', 'w')
    csvWriter = csv.writer(csvFile)
    csvWriter.writerow(["title", "videoId", "viewCount", "likeCount", "dislikeCount", "commentCount", "favoriteCount"])

    # Add each result to the appropriate list, and then display the lists of
    # matching videos, channels, and playlists.
    for search_result in search_response.get("items", []):
        if search_result["id"]["kind"] == "youtube#video":
            #videos.append("%s (%s)" % (search_result["snippet"]["title"], search_result["id"]["videoId"]))
            title = search_result["snippet"]["title"]
            title = unidecode.unidecode(title)  # Dongho 08/10/16
            videoId = search_result["id"]["videoId"]
            video_response = youtube.videos().list(id=videoId, part="statistics").execute()
            for video_result in video_response.get("items", []):
                viewCount = video_result["statistics"]["viewCount"]
                if 'likeCount' not in video_result["statistics"]:
                    likeCount = 0
                else:
                    likeCount = video_result["statistics"]["likeCount"]
                if 'dislikeCount' not in video_result["statistics"]:
                    dislikeCount = 0
                else:
                    dislikeCount = video_result["statistics"]["dislikeCount"]
                if 'commentCount' not in video_result["statistics"]:
                    commentCount = 0
                else:
                    commentCount = video_result["statistics"]["commentCount"]
                if 'favoriteCount' not in video_result["statistics"]:
                    favoriteCount = 0
                else:
                    favoriteCount = video_result["statistics"]["favoriteCount"]
                csvWriter.writerow([title, videoId, viewCount, likeCount, dislikeCount, commentCount, favoriteCount])

    csvFile.close()

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Search on YouTube')
    parser.add_argument("--q", help="Search term", default="Google")
    parser.add_argument("--max-results", help="Max results", default=25)
    args = parser.parse_args()
    #try:
    youtube_search(args)
    #except HttpError, e:
    #    print("An HTTP error %d occurred:\n%s" % (e.resp.status, e.content))
Whenever I run the code I get the following errors:
viewCount = video_result[u'statistics']["viewCount"]
KeyError: 'statistics'
WARNING:googleapiclient.discovery_cache:file_cache is unavailable when using oauth2client >= 4.0.0
Traceback (most recent call last):
File "C:\Anaconda3\Anaconda3 4.2.0\lib\site-packages\googleapiclient\discovery_cache__init__.py", line 36, in autodetect
from google.appengine.api import memcache
ImportError: No module named 'google'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Anaconda3\Anaconda3 4.2.0\lib\site-packages\googleapiclient\discovery_cache\file_cache.py", line 33, in
from oauth2client.contrib.locked_file import LockedFile
ImportError: No module named 'oauth2client.contrib.locked_file'
How do I overcome this error?
Can you please check whether 'statistics' is actually a key in video_result? A KeyError occurs when you try to access a key that is not present in a Python dict. The safer way is to use get() when looking up a key,
e.g.: video_result.get('statistics')
This will handle the KeyError.
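Applied to the statistics lookup from the question, a minimal sketch would be:

# Fall back to an empty dict / 0 instead of raising KeyError.
stats = video_result.get("statistics", {})
viewCount = stats.get("viewCount", 0)
likeCount = stats.get("likeCount", 0)
dislikeCount = stats.get("dislikeCount", 0)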
The second error is due to missing imports: the library cannot import the file/function it needs, and that is why it throws an exception while reporting another exception.
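As a hedged workaround for the file_cache warning (not part of the original answer), the discovery cache can be disabled so googleapiclient stops trying to import those oauth2client internals:

# Same build() call as in the question, with the discovery cache disabled.
youtube = build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION,
                developerKey=DEVELOPER_KEY, cache_discovery=False)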
