Retrieving values from dicts - python

Sorry, brand new to Python, so this may be really simple.
I'm trying to retrieve a value from a dictionary based on the user's input. I'm currently trying an if/else statement but have also tried a for loop. My code is below; I just can't seem to find out why it isn't working.
import requests
import os
import json

orgUrl = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
payload = None
headers = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "X-Cisco-Meraki-API-Key": os.environ.get('MERAKI_DASHBOARD_API_KEY')
}

orgResponse = requests.request('GET', orgUrl, headers=headers, data=payload)
orgResponseJson = orgResponse.json()
#print(orgResponseJson)

orgName = input("Please enter organisation name \n")

if orgName == (orgResponseJson.index['name']):
    print(orgResponseJson.index['id'])
else:
    print("Organisation does not exist")
If there's any study material out there someone could suggest, that would be appreciated. The code is formatted correctly in the IDE, but for some reason it hasn't copied across that way.
So I can't show the contents of the dictionary as it is business-sensitive information, but as an example the values are along the lines of {'name': 'business 1', 'id': '8384965'}.
So the code runs, prints out the input, doesn't return the value of 'id', and then goes on to print the else statement.
I guess it's moved on to the else statement because the if statement isn't looking in the dict?
************* UPDATE *****************
I've solved it. Whether it's best practice or not is another issue. I opted for a for loop using the break statement:
for orgLicense in orgResponseJson:
    if orgName == orgLicense['name']:
        print(orgLicense['licensing'])
        break
    else:
        print("Organisation does not exist")
        break
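For what it's worth, a tidier pattern for "first dict whose 'name' matches" is next() with a generator expression, which stops at the first match without a manual break. A minimal sketch using made-up sample data shaped like the API response (the real list comes from orgResponse.json()):

```python
# Hypothetical sample; the real list comes from orgResponse.json()
orgs = [{'name': 'business 1', 'id': '8384965'},
        {'name': 'business 2', 'id': '1234567'}]

orgName = 'business 2'
# next() returns the first matching dict, or the default (None) if none match
match = next((org for org in orgs if org['name'] == orgName), None)
if match is not None:
    print(match['id'])  # prints 1234567
else:
    print("Organisation does not exist")
```

Unlike the loop above, this also reports "does not exist" only after checking every organisation, not just the first one.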
Thank-you
Jake

Related

How to make my bot skip over urls that don't exist

Hey guys, I was wondering if there was a way to make my bot skip invalid URLs after one try and continue with the for loop, but continue doesn't seem to work.
def check_valid(stripped_results):
    global vstripped_results
    vstripped_results = []
    for tag in stripped_results:
        conn = requests.head("https://" + tag)
        conn2 = requests.head("http://" + tag)
        status_code = conn.status_code
        website_is_up = status_code == 200
        if website_is_up:
            vstripped_results.append(tag)
        else:
            continue
stripped_results is an array of an unknown number of domains and subdomains, which is why I have the 'https://' part, and tbh I'm not even sure whether my if statement is effective or not.
Any help would be greatly appreciated; I don't want to get rate limited by Discord anymore from sending so many invalid domains through. :(
This is easy. To check the validity of a URL there exists a Python library, namely Validators. This library can be used to check whether any URL is valid. Let's take it step by step.
Firstly,
Here is the documentation link for validators:
https://validators.readthedocs.io/en/latest/
How do you validate a link using validators?
It is simple. Let's work on the command line for a moment. The module effectively gives a boolean answer for whether a link is valid: for the link of this question it gives True, and for an invalid link the result is falsy instead.
You can validate it using this syntax:
validators.url('Add your URL variable here')
Remember that this gives boolean value so code for it that way.
So you can use it this way...
I won't implement it in your code, as I want you to try it yourself once. I'll help you with it if you are unable to do it.
Thank You! :)
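If you'd rather avoid an extra dependency, a rough shape check with the standard library's urllib.parse can also filter out obvious non-URLs. This is a weaker test than validators (it only checks that an http(s) scheme and a host are present), offered here only as a sketch:

```python
from urllib.parse import urlparse

def looks_like_url(candidate):
    """Rough shape check: an http(s) scheme and a host part are present."""
    parts = urlparse(candidate)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

print(looks_like_url("https://stackoverflow.com"))  # True
print(looks_like_url("not a url"))                  # False
```

This tells you nothing about whether the site is up, only whether the string is URL-shaped, so it complements (rather than replaces) the requests.head check.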
Try this?
def check_valid(stripped_results):
    global vstripped_results
    vstripped_results = []
    for tag in stripped_results:
        conn = requests.head("https://" + tag)
        conn2 = requests.head("http://" + tag)
        status_code = conn.status_code
        website_is_up = status_code == 200
        if website_is_up:
            vstripped_results.append(tag)
        else:
            continue  # Do the thing here
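The piece still missing from both versions is that requests.head raises for an unreachable or invalid domain rather than returning a non-200 status, so the real skip has to happen in a try/except for requests.RequestException (the base class for connection and timeout errors). A sketch, with the HTTP call injectable so the logic can be exercised without the network; the timeout value is an assumption:

```python
import requests

def check_valid(stripped_results, head=requests.head):
    """Keep tags whose https:// site answers 200; skip any that raise."""
    valid = []
    for tag in stripped_results:
        try:
            resp = head("https://" + tag, timeout=5)
        except requests.RequestException:
            continue  # invalid or unreachable domain: give up after one try
        if resp.status_code == 200:
            valid.append(tag)
    return valid
```

Returning the list instead of mutating a global also makes the function easier to reuse and test.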

How do you compare data keys in a dictionary from a JSON file in Python?

I'm very new to Python and I'm trying to save a dictionary of user names and passwords to a JSON file. I also wanted an error checker: if a user name already exists, it prints out an error message. That is what the for loop in my else statement is for. I tried the "if userName in data:" method while my for loop was commented out, but it would skip right over that condition and go straight to adding the user. When I add the first user name to an empty JSON file, it is added properly to the dictionary and the JSON file. After this, however, I run into problems: when I try to add the second user and password, my for loop gives me a KeyError: 0.
def AddUser(userName, password):
    #if file is empty, no users inside
    if os.stat("users.json").st_size == 0:
        user = {'user': userName, 'password': password}
        with open('users.json', 'w') as f:
            json.dump(user, f)
        data = json.load(open('users.json'))
        print len(data)
        print "Success"
        sys.exit()
    else:
        #data = json.load(open('users.json'))
        for i in range(len(data)):
            nameChecker = data[i]['user']
            if (nameChecker == userName):
                sys.exit("Error: username exists")
        #if userName in data:
        #    sys.exit("Error: username exists")
        data = json.load(open('users.json'))
        if type(data) is dict:
            data = [data]
        data.append(
            {
                'user': userName,
                'password': password
            })
        with open('users.json', 'w') as outfile:
            json.dump(data, outfile)
        print len(data)
        print "Success"
I found out that this happens because len(data) is recognized as 2, but there is no second user; it is still one user name and one password. If I comment out my for loop and disregard the error check to add a second user, it adds the second user to the JSON file, but this time it displays a length of 2 again.
My json file looks like this after the first user name entered,
{"password": "password", "user": "jimmy"}
And this is what it looks like after the second one is added when commenting out the for loop
[{"password": "password", "user": "bimmy"}, {"password": "password", "user": "bimmy"}]
If I add a third user, it is added properly and displays a length of 3 (after two users are added, the length matches the number of users). If I then uncomment my for loop and run the code with a user name that already exists, the for loop catches it and displays "Error: username exists". And if I add a new user name with the for loop uncommented, it is added without an issue. How can I make the error checker work when adding the second user? And why does inputting the first user display a length of 2 when there is only one user name and one password? I can't seem to get around that issue when I try to add the second user name and password. Also, is there a proper way to add more user names after the JSON file is created? I tried data.update but kept getting error messages; data.append seemed to work.
The first time, on file creation, you should create a list of dicts, not just a dict, so instead of:
user = {'user':userName,'password': password}
do
user = [{'user':userName,'password': password}]
This fixes the main issue.
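The difference, and the mysterious "length 2", is easy to see in a quick round trip; this sketch uses throwaway values:

```python
import json

# Dumping a bare dict means the next load gives a dict back.
single = json.loads(json.dumps({'user': 'jimmy', 'password': 'pw'}))
print(type(single))   # <class 'dict'>
print(len(single))    # 2, because len() of a dict counts its keys
# single[0] raises KeyError: 0, because 0 is not a key of the dict.

# Starting from a one-element list keeps the file shape stable.
users = json.loads(json.dumps([{'user': 'jimmy', 'password': 'pw'}]))
print(type(users))    # <class 'list'>
print(len(users))     # 1, one user as expected
```

That also answers the question above: the "length 2" after the first user is the dict's two keys ('user' and 'password'), and the KeyError: 0 is data[0] being looked up as a dictionary key.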
Also, sys.exit() is probably not needed if you want to call the function multiple times in one run, and I uncommented data = json.load(open('users.json')).
It would also be nice to close files that were opened for reading, so instead of:
data = json.load(open('users.json'))
do
with open('users.json') as f:
    data = json.load(f)
The full corrected code with these suggestions applied is below:
import sys, os, json

def AddUser(userName, password):
    #if file is empty, no users inside
    if not os.path.exists('users.json'):
        user = [{'user': userName, 'password': password}]
        with open('users.json', 'w') as f:
            json.dump(user, f)
        data = json.load(open('users.json'))
        print(len(data))
        print("Success")
        #sys.exit()
    else:
        with open('users.json') as f:
            data = json.load(f)
        for i in range(len(data)):
            nameChecker = data[i]['user']
            if (nameChecker == userName):
                sys.exit("Error: username exists")
        #if userName in data:
        #    sys.exit("Error: username exists")
        #data = json.load(open('users.json'))
        if type(data) is dict:
            data = [data]
        data.append(
            {
                'user': userName,
                'password': password
            })
        with open('users.json', 'w') as outfile:
            json.dump(data, outfile)
        print(len(data))
        print("Success")

# Tests
if os.path.exists('users.json'):
    os.remove('users.json')
AddUser('a', 'b')
AddUser('c', 'd')

if <var> is None: doesn't seem to work

Disclaimer: I am very new to Python and have no idea what I am doing; I am teaching myself from the web.
I have some code that looks like this
Code:
from requests import get  # Ed: added for clarity

myurl = URLBASE + args.key
response = get(myurl)  # Ed: this is a requests.Response object
# check key is valid
json = response.text
print(json)
if json is None:
    sys.exit("Problem getting API data, check your key")
print("how did i get here")
Output:
null
how did i get here
But I have no idea how that is possible ... it literally says it is null in the print, but then doesn't match in the 'if'. Any help would be appreciated.
thx
So I am sure I still don't fully understand, but this "fixes" my problem.
The requests.Response object has a json() method, so I should have been using that (thanks wim) instead of text. Changing the code to this (below), as suggested, makes it work.
from requests import get

myurl = URLBASE + args.key
response = get(myurl)
# check key is valid
json = response.json()
if json is None:
    sys.exit("Problem getting API data, check your key")
print("how did i get here")
The question (for me inquisitively) remains, how would I do an if statement to determine if a string is null?
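To answer that follow-up: JSON's null parses to Python's None, and an "empty" string is simply falsy, so the usual checks look like this (a small sketch):

```python
import json

body = "null"              # the raw text the API returned
parsed = json.loads(body)  # JSON null becomes Python None
print(parsed is None)      # True; comparing the *string* "null" to None is always False

s = ""
if s is None:
    print("s is not a string at all")
elif not s:
    print("s is an empty string")  # this branch runs
```

So the original code compared the five-character string "null" to None, which can never match; parsing first (or checking response.json() as above) is what makes the None test meaningful.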
Thanks to Ry and wim, for their help.

Python Facebook API - cursor pagination

My question involves learning how to retrieve my entire list of friends using Facebook's Python API. The current call returns an object with a limited number of friends and a link to the 'next' page. How do I use this to fetch the next set of friends? (Please post a link to possible duplicates.) Any help would be much appreciated. In general, I need to learn about the pagination involved in using the API.
import facebook
import json
ACCESS_TOKEN = "my_token"
g = facebook.GraphAPI(ACCESS_TOKEN)
print json.dumps(g.get_connections("me","friends"),indent=1)
Sadly, the documentation of pagination has been an open issue for almost two years. You should be able to paginate like this (based on this example) using requests:
import facebook
import requests

ACCESS_TOKEN = "my_token"
graph = facebook.GraphAPI(ACCESS_TOKEN)
friends = graph.get_connections("me", "friends")

allfriends = []
# Wrap this block in a while loop so we can keep paginating requests until
# finished.
while True:
    try:
        for friend in friends['data']:
            allfriends.append(friend['name'].encode('utf-8'))
        # Attempt to make a request to the next page of data, if it exists.
        friends = requests.get(friends['paging']['next']).json()
    except KeyError:
        # When there are no more pages (['paging']['next']), break from the
        # loop and end the script.
        break
print allfriends
Update: There's a new generator method available which implements the above behavior and can be used to iterate over all friends like this:
for friend in graph.get_all_connections("me", "friends"):
    # Do something with this friend.
    pass
Meanwhile, while searching for an answer here, I found a much better approach:
import facebook

access_token = ""
graph = facebook.GraphAPI(access_token=access_token)
totalFriends = []
friends = graph.get_connections("me", "/friends&summary=1")
while 'paging' in friends:
    for i in friends['data']:
        totalFriends.append(i['id'])
    friends = graph.get_connections("me", "/friends&summary=1&after=" + friends['paging']['cursors']['after'])
Eventually you will get one response whose data is empty and which has no 'paging' key; at that point the loop ends and all the data will have been stored.
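The cursor-following loop above can be sketched generically. Here fetch_page is a hypothetical stand-in for the Graph call, used so the exit condition (a page arriving without a 'paging' key) can be exercised offline:

```python
def collect_all(fetch_page):
    """Follow 'after' cursors until a page arrives without a 'paging' key."""
    items = []
    page = fetch_page(after=None)
    while True:
        items.extend(item['id'] for item in page['data'])
        if 'paging' not in page:
            break
        page = fetch_page(after=page['paging']['cursors']['after'])
    return items
```

With the real API, fetch_page would wrap graph.get_connections and pass the cursor through the &after= parameter, as in the snippet above.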
I couldn't find this anywhere; these answers seem super complicated, and there's no way I would even use an SDK if I had to do stuff like that when paging a simple POST is so easy to start with. However:
FacebookAdsApi.init(my_app_id, my_app_secret, my_access_token)
my_account = AdAccount('act_23423423423423423')

# In the below, I added the limit to the max rows, 250.
# Also, more importantly, paging. The SDK has a really sneaky way of doing this:
# enclose the request in a list() and the results end up the same, but this will
# make the script request new objects until there are no more.
# I tested this example and compared to Graph API and as of right now, 1/22 9:47AM,
# I get 81 from Graph and 81 here.
fields = ['name']
params = {'limit': 250}
ads = list(my_account.get_ads(
    fields=fields,
    params=params,
))
Trick from the docs: "NOTE: We wrap the return value of get_ad_accounts with list() because get_ad_accounts returns an EdgeIterator object (located in facebook_business.adobjects) and we want to get the full list right away instead of having the iterator lazily loading accounts."
https://github.com/facebook/facebook-python-business-sdk
In the examples above you offset/paginate one page at a time; I think my while loop is simpler, since it only looks for the pagination key "next" to be None: if it doesn't exist, we have finished looping, and you will have your results in a list.
In this example I am just looking for all the people called jacob.
import requests
import facebook

token = access_token = "your token goes here"
fb = facebook.GraphAPI(access_token=token)
limit = 1
offset = 0
data = {"q": "jacob",
        "type": "user",
        "fields": "id",
        "limit": limit,
        "offset": offset}

req = fb.request('/search', args=data, method='GET')

users = []
for item in req['data']:
    users.append(item["id"])

pag = req['paging']
while pag.get("next") is not None:
    offset += limit
    data["offset"] = offset
    req = fb.request('/search', args=data, method='GET')
    for item in req['data']:
        users.append(item["id"])
    pag = req.get('paging')

print users

Dictionary / JSON issue using Python 2.7

I'm looking at scraping some data from Facebook using Python 2.7. My code basically increments the Facebook profile ID by 1 and captures the details returned by each page.
An example of the page I'm looking to capture the data from is graph.facebook.com/4.
Here's my code below:
import scraperwiki
import urlparse
import simplejson

source_url = "http://graph.facebook.com/"
profile_id = 1

while True:
    try:
        profile_id += 1
        profile_url = urlparse.urljoin(source_url, str(profile_id))
        results_json = simplejson.loads(scraperwiki.scrape(profile_url))
        for result in results_json['results']:
            print result
            data = {}
            data['id'] = result['id']
            data['name'] = result['name']
            data['first_name'] = result['first_name']
            data['last_name'] = result['last_name']
            data['link'] = result['link']
            data['username'] = result['username']
            data['gender'] = result['gender']
            data['locale'] = result['locale']
            print data['id'], data['name']
            scraperwiki.sqlite.save(unique_keys=['id'], data=data)
            #time.sleep(3)
    except:
        continue
    profile_id += 1
I am using the scraperwiki site to carry out this check, but no data is printed back to the console despite the line print data['id'], data['name'], which is there just to check the code is working.
Any suggestions on what is wrong with this code? As said, for each returned profile, the unique data should be captured and printed to screen as well as saved into the SQLite database.
Thanks
Any suggestions on what is wrong with this code?
Yes. You are swallowing all of your errors. There could be a huge number of things going wrong in the block under try. If anything goes wrong in that block, you move on without printing anything.
You should only ever use a try / except block when you are looking to handle a specific error.
Modify your code so that it looks like this:
while True:
    profile_id += 1
    profile_url = urlparse.urljoin(source_url, str(profile_id))
    results_json = simplejson.loads(scraperwiki.scrape(profile_url))
    for result in results_json['results']:
        print result
        data = {}
        # ... more ...
and then you will get detailed error messages when specific things go wrong.
As for your concern in the comments:
The reason I have the error handling is because, if you look for
example at graph.facebook.com/3, this page contains no user data and
so I don't want to collate this info and skip to the next user, ie. no
4 etc
If you want to handle the case where there is no data, then find a way to handle that case specifically. It is bad practice to swallow all errors.
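A sketch of handling just that case: a profile with no user data simply lacks the fields you want, so a KeyError on those fields is the specific signal to skip. The field names follow the question's code; the helper name is made up for illustration:

```python
def extract_profile(result):
    """Return the wanted fields, or None when the profile has no user data."""
    try:
        return {'id': result['id'], 'name': result['name']}
    except KeyError:
        return None  # no user data on this profile: the caller should skip it

print(extract_profile({'id': '4', 'name': 'Mark'}))  # {'id': '4', 'name': 'Mark'}
print(extract_profile({'error': 'no data'}))         # None
```

The loop then skips profiles where extract_profile returns None, while every other failure (network errors, malformed JSON, a typo) still surfaces loudly instead of being swallowed.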
