How to add special categories in SPARQLWrapper in Python

I am running the following SPARQL query with SPARQLWrapper:
from SPARQLWrapper import SPARQLWrapper, JSON
sparql = SPARQLWrapper("http://live.dbpedia.org/sparql")
sparql.setReturnFormat(JSON)
my_category = 'dbc:Meteorological_concepts'
sparql.setQuery(f" ASK {{ {my_category} skos:broader{{1,3}} dbc:Medicine }} ")
results = sparql.query().convert()
print(results['boolean'])
As shown above, this works fine with categories that do not contain brackets (e.g., dbc:Meteorological_concepts). However, when I enter a category with brackets (i.e., my_category = 'dbc:Elasticity_(physics)') I get the following error.
b"Virtuoso 37000 Error SP030: SPARQL compiler, line 4: syntax error at 'physics' before ')'\n\nSPARQL query:\ndefine sql:big-data-const 0 \n#output-format:application/sparql-results+json\n\n ASK { dbc:Elasticity_(physics) skos:broader{1,3} dbc:Medicine }\n"
CRITICAL: Exiting due to uncaught exception <class 'SPARQLWrapper.SPARQLExceptions.QueryBadFormed'>
Is there a way to resolve this issue? I am happy to provide more details if needed.

I am writing up what @StanislavKralin suggested in the comment above: always use the full URI in the SPARQL query, particularly when the identifier contains special characters.
from SPARQLWrapper import SPARQLWrapper, JSON
sparql = SPARQLWrapper("http://live.dbpedia.org/sparql")
sparql.setReturnFormat(JSON)
my_category = '<http://dbpedia.org/resource/Category:Elasticity_(physics)>'
sparql.setQuery(f" ASK {{ {my_category} skos:broader{{1,3}} dbc:Medicine }} ")
results = sparql.query().convert()
print(results['boolean'])
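Continuing that snippet, the substitution can be wrapped in a small helper so that any category label, brackets or not, is expanded to the full URI before it reaches the query. A minimal sketch (the helper name is mine; it assumes the standard dbc: namespace, http://dbpedia.org/resource/Category:):
def as_category_uri(label):
    # Expand a bare category label to the full DBpedia category URI so that
    # parentheses never reach the SPARQL parser inside a prefixed name.
    return f"<http://dbpedia.org/resource/Category:{label}>"

my_category = as_category_uri('Elasticity_(physics)')
sparql.setQuery(f" ASK {{ {my_category} skos:broader{{1,3}} dbc:Medicine }} ")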

Related

JSONDecodeError when converting an OpenStreetMap query in a Jupyter notebook

I need help with OpenStreetMap. I'm using Python (Jupyter notebook) to get data on hospitals in the Bali area of Indonesia. Here is my code and query:
import pandas as pd
import requests
import json
overpass_api = "http://overpass-api.de/api/interpreter"
query_hospital = """
[out:json];
{{geocodeArea:'Provinsi Bali'}}->.searchArea;
node[amenity='hospital'](area.searchArea);
out;
"""
response_hospital = requests.get(overpass_api, params={'data':query_hospital})
But when I run the next line,
data_hospital = response_hospital.json()
it raises JSONDecodeError: Expecting value: line 1 column 1 (char 0).
The query works fine in Overpass Turbo, but when I run it from the notebook it fails.
I've found the solution. It turns out the {{geocodeArea:...}} shortcut in double curly braces is Overpass Turbo syntax that Turbo expands before sending the query; the raw Overpass API cannot parse it. So I modified the query like this:
query_hospital = """
[out:json];
area[name=Bali];
node[amenity='hospital'](area);
out;
"""
or, using the area name in the local language:
query_hospital = """
[out:json];
area['name:id'='Provinsi Bali'];
node[amenity='hospital'](area);
out;
"""
The query returns the same result, and the response now parses as JSON.
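As a side note (not part of the original answer), it helps to fail fast on a non-200 response before calling .json(), since Overpass returns an HTML or plain-text error page for bad queries, which is exactly what produces the opaque JSONDecodeError:
import requests

response_hospital = requests.get(overpass_api, params={'data': query_hospital})
response_hospital.raise_for_status()  # raise a readable HTTPError instead of parsing an error page
data_hospital = response_hospital.json()
print(len(data_hospital['elements']), 'elements returned')  # [out:json] puts results under 'elements'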

How to compare the schemas of 2 databases with sqlalchemy-diff - Could not parse rfc1738 URL from string

I have Dev and Prod versions of an Azure SQL (or SQL Server) database.
I would like to compare their schemas as part of testing.
I read that sqlalchemy-diff could be a useful tool: https://pypi.org/project/sqlalchemy-diff/
However, I'm getting an error about the URL. What needs to be done?
CODE:
from pprint import pprint
from sqlalchemydiff import compare
DBURI1 = "Server=my-sql-server.database.windows.net,1433;Database=my-dev-
sql-db;UID=myuser;PWD=mypassword;Trusted_Connection=No;"
DBURI2 = "Server=my2-sql-server.database.windows.net,1433;Database=my2-dev-
sql-db;UID=myuser;PWD=mypassword;Trusted_Connection=No;"
result = compare(DBURI1, DBURI2)
if result.is_match:
    print('Databases are identical')
else:
    print('Databases are different')
    pprint(result.errors)
ERROR:
sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string 'Server=my-sql-server.database.windows.net,1433;Database=my-dev-sql-db;UID=myuser;PWD=mypassword;Trusted_Connection=No;'
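The error message points at the root cause: SQLAlchemy (which sqlalchemy-diff uses under the hood) expects RFC 1738-style URLs of the form dialect+driver://user:password@host:port/database, not ODBC connection strings. A sketch of what the equivalent URLs might look like (the ODBC driver name is an assumption to check against your installed drivers):
from urllib.parse import quote_plus
from sqlalchemydiff import compare

# RFC 1738-style URLs; quote_plus guards against special characters in the password
DBURI1 = (
    "mssql+pyodbc://myuser:%s@my-sql-server.database.windows.net:1433/my-dev-sql-db"
    "?driver=ODBC+Driver+17+for+SQL+Server" % quote_plus("mypassword")
)
DBURI2 = (
    "mssql+pyodbc://myuser:%s@my2-sql-server.database.windows.net:1433/my2-dev-sql-db"
    "?driver=ODBC+Driver+17+for+SQL+Server" % quote_plus("mypassword")
)
result = compare(DBURI1, DBURI2)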

SPARQLWrapper: problem querying an ontology in a local file

I'm working with SPARQLWrapper and I'm following the documentation. Here is my code:
queryString = "SELECT * WHERE { ?s ?p ?o. }"
sparql = SPARQLWrapper("http://example.org/sparql")  # I replaced this line with
sparql = SPARQLWrapper("file:///thelocation of my file in my computer")
sparql.setQuery(queryString)
try:
    ret = sparql.query()
    # ret is a stream with the results in XML, see <http://www.w3.org/TR/rdf-sparql-XMLres/>
except:
    deal_with_the_exception()
I'm getting these 2 errors:
1- The system cannot find the path specified
2- NameError: name 'deal_with_the_exception' is not defined
You need a SPARQL endpoint to make this work; a file:// path is not an endpoint. Consider setting up Apache Jena Fuseki on your local machine: https://jena.apache.org/documentation/fuseki2/ (The NameError is a separate issue: deal_with_the_exception is a placeholder from the documentation that you still need to define yourself.)
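Alternatively, if a full endpoint is more than you need, rdflib can run SPARQL directly over a local file with no HTTP server involved. A minimal sketch (the file path is a placeholder):
from rdflib import Graph

g = Graph()
g.parse("/path/to/ontology.owl")  # placeholder path; pass format="xml" etc. if rdflib cannot guess it
for s, p, o in g.query("SELECT * WHERE { ?s ?p ?o . } LIMIT 10"):
    print(s, p, o)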

Difficulty inserting data into MySQL db using pymysql

I have written a little script using Python 3 that gets an RSS news feed using the feedparser library.
I then loop through the entries (a dictionary) and use a try/except block to insert the data into a MySQL db using pymysql (originally I tried to use MySQLdb, but read here and in other places that it does not work with Python 3 or above).
I originally followed the PyMySQL example on GitHub, however this did not work for me and I had to use the different pymysql syntax they have here on DigitalOcean. That worked when I tested out the example on their site.
But when I tried to incorporate it into my query, there was an error: it would not run the code in the try block and just ran the except block each time.
Here is my code:
#! /usr/bin/python3
# web_scraper.py 1st part of the project, to get the data from the
# websites and store it in a mysql database
import cgitb
cgitb.enable()
import requests, feedparser, pprint, pymysql, datetime
from bs4 import BeautifulSoup

conn = pymysql.connect(host="localhost", user="root", password="pass", db="stories", charset="utf8mb4")
c = conn.cursor()

def adbNews():
    url = 'http://feeds.feedburner.com/adb_news'
    d = feedparser.parse(url)
    articles = d['entries']
    for article in articles:
        dt_obj = datetime.datetime.strptime(article.published, "%Y-%m-%d %H:%M:%S")
        try:
            sql = "INSERT INTO articles(article_title,article_desc,article_link,article_date) VALUES (%s,%s,%s,%s,%s)"
            c.execute(sql, (article.title, article.summary, article.link, dt_obj.strftime('%Y-%m-%d %H:%M:%S'),))
            conn.commit()
        except Exception:
            print("Not working")

adbNews()
I am not entirely sure what I am doing wrong. I have converted the date string into the format MySQL's DATETIME type expects (I originally did not have this), but each time I run the program nothing gets stored in the db and the exception gets printed.
EDIT:
After reading Daniel Roseman's comments, I removed the try/except block and read the errors that Python gave me. The problem was an extra argument in my SQL query.
Here is the edited working code:
#! /usr/bin/python3
# web_scraper.py 1st part of the project, to get the data from the
# websites and store it in a mysql database
import cgitb
cgitb.enable()
import requests, feedparser, pprint, pymysql, datetime
from bs4 import BeautifulSoup

conn = pymysql.connect(host="localhost", user="root", password="pass", db="stories", charset="utf8mb4")
c = conn.cursor()

def adbNews():
    url = 'http://feeds.feedburner.com/adb_news'
    d = feedparser.parse(url)
    articles = d['entries']
    for article in articles:
        dt_obj = datetime.datetime.strptime(article.published, "%Y-%m-%d %H:%M:%S")
        # extra argument was here, removed now
        sql = "INSERT INTO articles(article_title,article_desc,article_link,article_date) VALUES (%s,%s,%s,%s)"
        c.execute(sql, (article.title, article.summary, article.link, dt_obj.strftime('%Y-%m-%d %H:%M:%S'),))
        conn.commit()

adbNews()
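As an aside, the lesson from the comments generalizes: if you keep a try/except, print the actual exception rather than a fixed string, so the real error (here, the extra %s placeholder) surfaces immediately. A small sketch of that pattern:
try:
    c.execute(sql, (article.title, article.summary, article.link,
                    dt_obj.strftime('%Y-%m-%d %H:%M:%S')))
    conn.commit()
except Exception as e:
    print("Insert failed:", e)  # the real error message now points at the cause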

How can I properly serialize the answers to Wikidata SPARQL queries?

I have the following example of querying Wikidata via Python's SPARQLWrapper:
import rdflib, urllib
from SPARQLWrapper import SPARQLWrapper, JSON, XML, TURTLE, RDF, N3
from rdflib import Graph, Namespace, URIRef, RDF  # , RDFS, Literal

def graph_full(uri, f):
    sparql = SPARQLWrapper('https://query.wikidata.org/sparql')
    sparql.setQuery('''
        PREFIX entity: <http://www.wikidata.org/entity/>
        SELECT ?predicate ?object WHERE {
            <''' + urllib.unquote(uri).encode("utf8") + '''> ?predicate ?object .
        } LIMIT 100
    ''')
    sparql.setReturnFormat(N3)
    results = sparql.query().convert()
    # print results.serialize()
    print type(results)
    g = Graph()
    g.parse(results)
    print g
    # g.serialize(f, format="n3")

if __name__ == '__main__':
    graph_full("entity:Q76", "wikidata/output.nt")
I want to serialize the result of the SPARQL query and save it to a file. This seems to always throw the following error:
Exception: Unexpected type '<type 'instance'>' for source '<xml.dom.minidom.Document instance at 0x7fa11e3715a8>'
Using similar code against DBpedia SPARQL endpoints throws no errors.
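The likely culprit is the query form rather than the endpoint: a SELECT query returns tabular results (here, a SPARQL Results XML document parsed by xml.dom.minidom), which rdflib's Graph.parse cannot consume. A CONSTRUCT query returns actual triples, and SPARQLWrapper's convert() then yields an rdflib graph directly. A sketch under that assumption, with the prefixed name expanded to a full URI:
from SPARQLWrapper import SPARQLWrapper, XML

sparql = SPARQLWrapper('https://query.wikidata.org/sparql')
sparql.setQuery('''
CONSTRUCT { <http://www.wikidata.org/entity/Q76> ?predicate ?object }
WHERE { <http://www.wikidata.org/entity/Q76> ?predicate ?object }
LIMIT 100
''')
sparql.setReturnFormat(XML)   # RDF/XML; CONSTRUCT results convert to an rdflib Graph
g = sparql.query().convert()
g.serialize(destination='wikidata/output.nt', format='nt')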
