GeoIPIPSP.dat Invalid database type - python

We have a commercial MaxMind subscription to obtain a GeoIP database with ISP information (GeoIPIPSP.dat). However, when I try to query this file, I keep getting the following error:
GeoIPError: Invalid database type, expected Org, ISP or ASNum
I'm using the python-api:
geo = GeoIP.open("/GeoIPIPSP.dat", GeoIP.GEOIP_STANDARD)
isp = geo.name_by_addr(ip) # or isp_by_addr with pygeoip
When I use the API to ask for the database type (geo._type) I get "1" ... the same value I get when I open a regular GeoIP.dat. I'm wondering if there's something wrong with GeoIPIPSP.dat, but it's the most recent file from MaxMind's customer download page.
Any insights greatly appreciated!

It turns out there was indeed a problem with the database file. After a re-download, everything works as it is supposed to.
I switched to pygeoip, though, and access the database like this:
import pygeoip
geo_isp = pygeoip.GeoIP("/usr/share/GeoIP/GeoIPIPSP.dat")
isp = geo_isp.isp_by_addr("8.8.8.8")


Zillow API having issues with running basic commands

I am trying to use the Zillow API, but I keep getting the following error and I'm not sure what I am doing wrong. I posted a screenshot of my API settings on Zillow and I think that might be the issue, but I am not sure. I'm asking to get my code checked and to find out whether my settings are wrong; I've tried changing them, but Zillow keeps telling me the website is experiencing an error when I try, so I do not know for sure.
import zillow
key = 'my-zillow-key'
address = "3400 Pacific Ave., Marina Del Rey, CA"
postal_code = "90292"
api = zillow.ValuationApi()
data = api.GetSearchResults(key, address, postal_code)
data = api.GetDeepSearchResults(key, "826 Entrada St, Bossier City, LA", "71111")
Error:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/zillow/api.py", line 130, in GetDeepSearchResults
    place.set_data(xmltodict_data.get('SearchResults:searchresults', None)['response']['results']['result'])
KeyError: 'response'
During handling of the above exception, another exception occurred:
NOTE: neither data = api.GetSearchResults(key, address, postal_code) nor data = api.GetDeepSearchResults(key, "826 Entrada St, Bossier City, LA", "71111") works when run by itself.
There is another library called pyzillow, and its APIs work for me. Maybe you can give it a try.
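For reference, a minimal sketch of what a pyzillow call might look like, assuming its documented ZillowWrapper / GetDeepSearchResults interface; the key is a placeholder and the address is taken from the question:
from pyzillow.pyzillow import ZillowWrapper, GetDeepSearchResults

# Placeholder API key; address/zip come from the question above.
zillow_data = ZillowWrapper('my-zillow-key')
response = zillow_data.get_deep_search_results("826 Entrada St, Bossier City, LA", "71111")
result = GetDeepSearchResults(response)
print(result.zestimate_amount)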
It seems that the Zillow API is being (very unceremoniously) turned down. It's possible your original issue was different and would have been addressed by swapping to pyzillow, but I suspect at this point you're out of luck unless you can get access to the Bridge APIs that Zillow appears to be migrating to.

Is it possible to read a .csv from a remote server, using Paramiko and Dask's read_csv() method in conjunction?

Today I began using the Dask and Paramiko packages, partly as a learning exercise, and partly because I'm beginning a project that will require dealing with large datasets (10s of GB) that must be accessed from a remote VM only (i.e. they cannot be stored locally).
The following piece of code belongs to a short helper program that will make a Dask dataframe of a large CSV file hosted on the VM. I want to later pass its output (a reference to the Dask dataframe) to a second function that will perform some overview analysis on it.
import dask.dataframe as dd
import paramiko as pm
import pandas as pd
import sys

def remote_file_to_dask_dataframe(remote_path):
    if isinstance(remote_path, (str)):
        try:
            client = pm.SSHClient()
            client.load_system_host_keys()
            client.connect('#myserver', username='my_username', password='my_password')
            sftp_client = client.open_sftp()
            remote_file = sftp_client.open(remote_path)
            df = dd.read_csv(remote_file)
            remote_file.close()
            sftp_client.close()
            return df
        except:
            print("An error occurred.")
            sftp_client.close()
            remote_file.close()
    else:
        raise ValueError("Path to remote file as string required")
The code is neither nice nor complete, and I will replace the username and password with SSH keys in time, but this is not the issue. In a Jupyter notebook, I've previously opened the sftp connection with a path to a file on the server and read it into a dataframe with a regular Pandas read_csv call. However, here the equivalent line, using Dask, is the source of the problem: df = dd.read_csv(remote_file).
I've looked at the documentation online (here), but I can't tell whether what I'm trying above is possible. It seems that for networked options, Dask wants a URL. The parameter-passing options for, e.g., S3 appear to depend on that infrastructure's backend. I unfortunately cannot make any sense of the dask-ssh documentation (here).
I've poked around with print statements and the only line that fails to execute is the one stated. The error raised is:
raise TypeError('url type not understood: %s' % urlpath)
TypeError: url type not understood:
Can anybody point me in the right direction for achieving what I'm trying to do? I'd expected Dask's read_csv to behave as Pandas' does, since it's based on the same function.
I'd appreciate any help, thanks.
p.s. I'm aware of Pandas' read_csv chunksize option, but I would like to achieve this through Dask, if possible.
In the master version of Dask, file-system operations now use fsspec, which, along with the previous implementations (s3, gcs, hdfs), supports some additional file systems; see the mapping of protocol identifiers in fsspec.registry.known_implementations.
In short, using a URL like "sftp://user:pw@host:port/path" should now work for you, if you install fsspec and Dask from master.
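For illustration, a minimal sketch of what that might look like; the host, port, and credentials are placeholders, and it is assumed that the storage_options keywords are forwarded to Paramiko's connect (fsspec and paramiko must be installed):
import dask.dataframe as dd

# Placeholder host and credentials; keywords assumed to be passed through to Paramiko.
df = dd.read_csv(
    "sftp://myserver:22/path/to/file.csv",
    storage_options={"username": "my_username", "password": "my_password"},
)
print(df.head())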
It seems that you would have to implement their "file system" interface.
I'm not sure what the minimal set of methods is that you need to implement to allow read_csv, but you definitely have to implement open.
class SftpFileSystem(object):
    def open(self, path, mode='rb', **kwargs):
        return sftp_client.open(path, mode)

dask.bytes.core._filesystems['sftp'] = SftpFileSystem
df = dd.read_csv('sftp://remote/path/file.csv')

Read from Nextion touch display via Python in Win10 over USB/TTL converter

Today I tried desperately to read values from a Nextion display in my Python code.
Writing to it works, but I simply can't manage to get Python to read from it.
My code looks like this:
def ser_escape():
    escape = '\xff'.encode('iso-8859-1')
    ser.write(escape)
    ser.write(escape)
    ser.write(escape)

import serial
import pynextion

EndCom = "\xff\xff\xff"
ser = serial.Serial(port='COM4', baudrate=9600)
test = b't0.txt="MyText"'
ser.write(test)
ser_escape()
ser.flush
ser_escape()
ser.flush
ser.write(b'get t0.txt')
print(ser.read())
ser_escape()
ser.close()
The output is just: b'\x1a'
which isn't anything close to the behaviour I expected.
According to this document: https://www.itead.cc/wiki/Nextion_Instruction_Set#get:_Get_variable.2Fconstant_value_with_format
I should be able to use "get <variable>" to receive the information stored there.
I'd be happy if someone could help me out here.
Solved it on my own:
Instead of "get Start.currentPage.txt" you could insert a call for any variable you want; after that I just cut out the part of interest from the string, since I don't need the begin- and end-of-message symbols.
import time
from pynextion import PySerialNex

nexSerial = PySerialNex("COM4")

def getActPageName(nexSerial):
    nexSerial.write("get Start.currentPage.txt")
    time.sleep(0.1)
    Var = str(nexSerial.read_all())
    Var = Var[Var.find('p')+1:Var.find('\\')]
    return Var
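For comparison, a minimal sketch of reading the same kind of reply with pyserial alone, assuming the display answers a get on a text variable with a 0x70 return code followed by the text and the 0xFF 0xFF 0xFF terminator described in the instruction set; the port name is a placeholder:
import serial

ser = serial.Serial(port='COM4', baudrate=9600, timeout=1)
ser.write(b'get t0.txt\xff\xff\xff')        # command plus end-of-command bytes
reply = ser.read_until(b'\xff\xff\xff')     # read up to the terminator
if reply.startswith(b'\x70'):               # 0x70 = string data return code
    print(reply[1:-3].decode('iso-8859-1')) # strip return code and terminator
ser.close()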

Read Binary string in Python, zlib

I want to store a large JSON (dict) from Python in DynamoDB.
After some investigation it seems that zlib is the way to go to get compression at a good level. Using the code below, I'm able to encode the dict.
ranking_compressed = zlib.compress(simplejson.dumps(response["Item"]["ranking"]).encode('utf-8'))
The (string?) then looks like this: b'x\x9c\xc5Z\xdfo\xd3....
I can directly decompress this and get the dict back with:
ranking_decompressed = simplejson.loads(str(zlib.decompress(ranking_compressed).decode('utf-8')))
All good so far. However, when I put this in DynamoDB and then read it back, the same decompress code as above fails. The (string?) now looks like this:
Binary(b'x\x9c\xc5Z\xdf...
The error I get is:
bytes-like object is required, not 'Binary'
I've tried accessing the Binary with e.g. .data, but I can't reach it.
Any help is appreciated.
Boto3 Binary objects have a value property.
# in general...
binary_obj.value
# for your specific case...
ranking_decompressed = simplejson.loads(str(zlib.decompress(response["Item"]["ranking_compressed"].value).decode('utf-8')))
Oddly, this seems to be documented nowhere except in the source code for the Binary class here.
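To illustrate the full round trip, a minimal sketch; the table name, key, and attribute names are placeholders:
import zlib
import simplejson
import boto3

table = boto3.resource("dynamodb").Table("rankings")  # placeholder table name

ranking = {"scores": [1, 2, 3]}
compressed = zlib.compress(simplejson.dumps(ranking).encode("utf-8"))
table.put_item(Item={"id": "abc", "ranking": compressed})

item = table.get_item(Key={"id": "abc"})["Item"]
# DynamoDB hands the attribute back as a boto3 Binary wrapper; .value is the raw bytes.
restored = simplejson.loads(zlib.decompress(item["ranking"].value).decode("utf-8"))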

How do I get the XML format of Bugzilla given a bug ID using python and XML-RPC?

This question has been updated
I am writing a Python script using the python-bugzilla 1.1.0 package from PyPI. I am able to get all the bug IDs, but I want to know if there is a way for me to access each bug's XML page. Here is the code I have so far:
bz = bugzilla.Bugzilla(url='https://bugzilla.mycompany.com/xmlrpc.cgi')
try:
    bz.login('name#email.com', 'password')
    print 'Authorization cookie received.'
except bugzilla.BugzillaError:
    print(str(sys.exc_info()[1]))
    sys.exit(1)

# getting all the bug IDs and displaying them
bugs = bz.query(bz.build_query(assigned_to="your-bugzilla-account"))
for bug in bugs:
    print bug.id
I don't know how to access each bug's XML page, and I'm not sure if it is even possible to do so. Can anyone help me with this? Thanks.
bz.getbugs() will get all bugs; bz.getbugssimple is also worth a look.
#!/usr/bin/env python
import bugzilla

bz = bugzilla.Bugzilla(url='https://bugzilla.company.com/xmlrpc.cgi')
bz.login('username#company.com', 'password')

results = bz.query(bz.url_to_query(queryUrl))
bids = []
for b in results:
    bids.append(b.id)
print bids
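If the goal is specifically each bug's XML page, Bugzilla also serves one over plain HTTP at show_bug.cgi with ctype=xml, so the IDs collected above can be turned into XML with a normal HTTP request; a minimal sketch, with a placeholder base URL and bug id:
import requests

base = "https://bugzilla.mycompany.com"  # placeholder Bugzilla base URL
bug_id = 12345                           # placeholder bug id

resp = requests.get("%s/show_bug.cgi" % base, params={"ctype": "xml", "id": bug_id})
print(resp.text)  # the raw XML for the bug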
