Zillow API having issues with running basic commands - python

I am trying to use the Zillow API, but I keep getting the error below and I'm not sure what I am doing wrong. I posted a screenshot of my API settings on Zillow, and I think those might be the issue, but I am not sure. I'd like my code checked, and to know whether my settings are wrong. I've tried changing them, but Zillow keeps telling me the website is experiencing an error whenever I try, so I can't be sure.
import zillow
key = 'my-zillow-key'
address = "3400 Pacific Ave., Marina Del Rey, CA"
postal_code = "90292"
api = zillow.ValuationApi()
data = api.GetSearchResults(key, address, postal_code)
data = api.GetDeepSearchResults(key, "826 Entrada St, Bossier City, LA", "71111")
Error:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/zillow/api.py", line 130, in GetDeepSearchResults
    place.set_data(xmltodict_data.get('SearchResults:searchresults', None)['response']['results']['result'])
KeyError: 'response'

During handling of the above exception, another exception occurred:
NOTE: neither data = api.GetSearchResults(key, address, postal_code) nor data = api.GetDeepSearchResults(key, "826 Entrada St, Bossier City, LA", "71111") works when run by itself.
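The KeyError: 'response' means the parsed XML dict does not have the success shape the library indexes into, typically because Zillow returned an error payload (bad ZWSID, no results for the address) rather than search results. As a minimal sketch, a defensive lookup makes that visible instead of crashing; both sample payloads below are hypothetical, not real Zillow responses:

```python
def dig(d, *keys):
    """Walk nested dicts, returning None instead of raising KeyError."""
    for k in keys:
        if not isinstance(d, dict) or k not in d:
            return None
        d = d[k]
    return d

# Hypothetical parsed responses: one success-shaped, one error-shaped.
ok = {'response': {'results': {'result': {'zpid': '48749425'}}}}
err = {'message': {'text': 'Error: invalid ZWSID', 'code': '2'}}

print(dig(ok, 'response', 'results', 'result'))   # the result dict
print(dig(err, 'response', 'results', 'result'))  # None instead of KeyError
print(dig(err, 'message', 'text'))                # the API's own error text
```

Inspecting the raw parsed response this way usually surfaces Zillow's own error code, which points at the account or settings problem directly.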

There is another library called pyzillow, and its API works for me. Maybe you can give it a try.

It seems that the Zillow API is being (very unceremoniously) turned down. It's possible your original issue was different and would have been addressed by swapping to pyzillow, but I suspect at this point you're out of luck unless you can get access to the Bridge APIs that Zillow appears to be migrating to.

Related

OwlReady2 error after using consecutive load()

I've been using owlready2 to parse multiple input OWL ontologies. The problem is: I get an error every time I try to load the second ontology. If I only load one, everything works fine. Whenever I try to load the second, I get an error from the owlready2 load() function:
SELECT x FROM transit""", (s, p, p)).fetchall(): yield x
sqlite3.OperationalError: near "WITH": syntax error
Relevant information:
- on my machine, I can do as many loads as I want and it works fine
- the error only happens when I port my code to a Linux server of my department in order to deploy it
Any suggestions?
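A likely cause: the near "WITH" error comes from SQLite itself. owlready2 keeps its quadstore in SQLite and issues queries using common table expressions (WITH ... SELECT), which SQLite only supports from version 3.8.3 onward, and older server distributions often link Python against an older SQLite than a desktop machine does. A quick sketch to compare the versions on both hosts:

```python
import sqlite3

def supports_cte(version=sqlite3.sqlite_version):
    """True if the given SQLite version string supports WITH queries (>= 3.8.3)."""
    return tuple(int(p) for p in version.split('.')) >= (3, 8, 3)

print(sqlite3.sqlite_version, '->',
      'CTE ok' if supports_cte() else 'too old for owlready2')
```

If the server reports a version below 3.8.3, upgrading SQLite (or the Python build linked against it) should make the second load() work.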

get all files in drive via REST API

I'm trying to access all my files in my drive via the endpoint
/me/drive/root/children
However, it returns 0 children despite the following observations:
Calling /me/drive/root returns:
","folder":{"childCount":3},"root":{},"size":28413,"specialFolder":{"name":"documents"}}
More interestingly, doing the API call from the Graph Explorer:
https://graph.microsoft.io/en-us/graph-explorer does show the 3 files that I have when using me/drive/root/children.
The graph explorer matches perfectly the API call when using /me/drive/root, but not /me/drive/root/children.
What is happening?
EDIT:
Following Brad's suggestion I decoded the token with https://jwt.io/ and the scp parameter reads:
"scp": "Mail.Send User.Read",
Second edit:
I removed all the app permissions from apps.dev.microsoft.com and I still see the same behavior. It looks like the permissions I set there have no effect.
The code above follows the example found at:
https://dev.office.com/code-samples-detail/5989
As it turns out, the whole confusion was coming from here:
microsoft = oauth.remote_app(
    'microsoft',
    consumer_key=client_id,
    consumer_secret=client_secret,
    request_token_params={'scope': 'User.Read Mail.Send Files.Read Files.ReadWrite'},
    base_url='https://graph.microsoft.com/v1.0/',
    request_token_url=None,
    access_token_method='POST',
    access_token_url='https://login.microsoftonline.com/common/oauth2/v2.0/token',
    authorize_url='https://login.microsoftonline.com/common/oauth2/v2.0/authorize'
)
I did not have the right scopes declared in request_token_params. So even if the app has the permissions, without the scopes declared there you cannot access the files.
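To check which scopes a token actually carries, you can decode its payload segment locally, which is the same unverified decoding jwt.io performs. The token below is a hand-built stand-in carrying only the two scopes from the question, not a real Graph token:

```python
import base64
import json

def jwt_claims(token):
    """Decode a JWT's payload segment without verifying the signature."""
    payload = token.split('.')[1]
    payload += '=' * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Hand-built stand-in token (header.payload.signature, signature left empty):
header = base64.urlsafe_b64encode(b'{"alg":"none"}').decode().rstrip('=')
body = base64.urlsafe_b64encode(
    json.dumps({'scp': 'Mail.Send User.Read'}).encode()).decode().rstrip('=')
token = header + '.' + body + '.'

print(jwt_claims(token)['scp'])  # -> Mail.Send User.Read
```

When Files.Read is missing from scp, as it was here, an empty /me/drive/root/children response is consistent with the token simply not being allowed to enumerate files.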

Python 2.7 Yahoo Finance No Definition Found

Hope you are well. I'm using Python 2.7 and new to it. I'm trying to use the Yahoo Finance API to get information on stocks; here is my code:
from yahoo_finance import Share
yahoo = Share('YHOO')
print yahoo.get_historical('2014-04-25', '2014-04-29')
This code, though, works only once out of 4 attempts; the other 3 times it gives me this error:
YQL Query error: Query failed with error: No Definition found for Table yahoo.finance.quote
Is there any way to fix this error so that the code works 100% of the time?
Thanks.
Warmest regards
This is a server-side error. The query.yahooapis.com service appears to be handled by a cluster of machines, and some of those machines appear to be misconfigured. This could be a temporary problem.
I see the same error when accessing the API directly using curl:
$ curl "http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.quote%20where%20symbol%20%3D%20%22YHOO%22&format=json&env=store%3a//datatables.org/alltableswithkeys"
{"error":{"lang":"en-US","description":"No definition found for Table yahoo.finance.quote"}}
Other than retrying in a loop, there is no way to fix this on the Python side:
import yahoo_finance
from yahoo_finance import Share

data = None
for i in range(10):  # retry 10 times
    try:
        yahoo = Share('YHOO')
        data = yahoo.get_historical('2014-04-25', '2014-04-29')
        break
    except yahoo_finance.YQLQueryError:
        continue

if data is None:
    print 'Failed to retrieve data from the Yahoo service, try again later'
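A variant of the retry loop with exponential backoff is gentler on the misconfigured service than retrying immediately. The flaky() function below is a stand-in for the Share(...).get_historical(...) call so the sketch runs without network access:

```python
import time

def retry(fn, attempts=5, base_delay=0.5, exc=Exception):
    """Call fn() until it succeeds, sleeping base_delay * 2**i between tries."""
    for i in range(attempts):
        try:
            return fn()
        except exc:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** i)

# Stand-in for Share('YHOO').get_historical(...): fails twice, then succeeds.
calls = {'n': 0}
def flaky():
    calls['n'] += 1
    if calls['n'] < 3:
        raise RuntimeError('No definition found for Table yahoo.finance.quote')
    return [{'Date': '2014-04-25', 'Close': '34.67'}]

print(retry(flaky, base_delay=0.01))  # succeeds on the third attempt
```

In real use you would pass a small wrapper around the yahoo_finance call as fn and narrow exc to yahoo_finance.YQLQueryError, as in the loop above.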

How do I get the XML format of Bugzilla given a bug ID using python and XML-RPC?

This question has been updated
I am writing a Python script using the python-bugzilla 1.1.0 package from PyPI. I am able to get all the bug IDs, but I want to know if there is a way for me to access each bug's XML page. Here is the code I have so far:
import sys
import bugzilla

bz = bugzilla.Bugzilla(url='https://bugzilla.mycompany.com/xmlrpc.cgi')
try:
    bz.login('name#email.com', 'password')
    print 'Authorization cookie received.'
except bugzilla.BugzillaError:
    print(str(sys.exc_info()[1]))
    sys.exit(1)

# getting all the bug IDs and displaying them
bugs = bz.query(bz.build_query(assigned_to="your-bugzilla-account"))
for bug in bugs:
    print bug.id
I don't know how to access each bug's XML page and not sure if it is even possible to do so. Can anyone help me with this? Thanks.
bz.getbugs()
will get all bugs; bz.getbugssimple is also worth a look.
#!/usr/bin/env python
import bugzilla

bz = bugzilla.Bugzilla(url='https://bugzilla.company.com/xmlrpc.cgi')
bz.login('username#company.com', 'password')

# queryUrl is a saved-search URL copied from the Bugzilla web UI
results = bz.query(bz.url_to_query(queryUrl))

bids = []
for b in results:
    bids.append(b.id)
print bids
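If the goal is literally each bug's XML page, Bugzilla serves one at show_bug.cgi?ctype=xml&id=<bug id>, so the ids collected above can be turned into fetchable URLs. The base host below is the placeholder one from the question:

```python
try:
    from urllib.parse import urlencode   # Python 3
except ImportError:
    from urllib import urlencode         # Python 2

# Placeholder host from the question; substitute your real Bugzilla base URL.
BASE = 'https://bugzilla.mycompany.com'

def bug_xml_url(bug_id, base=BASE):
    """Build the URL of a bug's XML rendering (show_bug.cgi?ctype=xml&id=...)."""
    return '%s/show_bug.cgi?%s' % (base, urlencode({'ctype': 'xml', 'id': bug_id}))

print(bug_xml_url(12345))
# Each page can then be fetched with a plain HTTP GET, e.g. via urllib.
```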

GeoIPIPSP.dat Invalid database type

We have a commercial MaxMind subscription to obtain a GeoIP database with ISP information (GeoIPIPSP.dat). However, when I try to query this file, I keep getting the following error:
GeoIPError: Invalid database type, expected Org, ISP or ASNum
I'm using the python-api:
geo = GeoIP.open("/GeoIPIPSP.dat", GeoIP.GEOIP_STANDARD)
isp = geo.name_by_addr(ip) # or isp_by_addr with pygeoip
When I use the API to ask for the database type (geo._type) I get "1", the same value I get when I open a regular GeoIP.dat. I'm wondering if there's something wrong with GeoIPIPSP.dat, but it's the most recent file from MaxMind's customer download page.
Any insights greatly appreciated!
It turns out there was indeed a problem with the database file. After a re-download, everything works as it is supposed to.
I switched to pygeoip though and access the database like this:
import pygeoip
geo_isp = pygeoip.GeoIP("/usr/share/GeoIP/GeoIPIPSP.dat")
isp = geo_isp.isp_by_addr("8.8.8.8")
