Owlready2 error after consecutive load() calls - python

I have been using owlready2 to parse multiple input OWL ontologies. The problem is that I get an error every time I try to load the second ontology. If I only load one, everything works fine. Whenever I try to load the second, I get an error from Owlready2's load() function:
... SELECT x FROM transit""", (s, p, p)).fetchall(): yield x
sqlite3.OperationalError: near "WITH": syntax error
Relevant information:
on my machine, I can do as many loads as I want and everything works fine
the error only happens when I port my code to my department's Linux server to get it deployed
Any suggestions?
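A possible cause worth checking (an assumption based on the error message, not something confirmed here): the server's Python may be linked against an SQLite library older than 3.8.3, which is when SQLite gained common table expressions (the WITH keyword) that Owlready2's quadstore queries rely on. A quick way to compare the two machines:

import sqlite3

# Owlready2's quadstore issues WITH RECURSIVE queries, which need
# SQLite 3.8.3 or newer; compare this value on both machines
print(sqlite3.sqlite_version)  # version of the underlying SQLite C library

If the server reports an older version, upgrading SQLite (or using a Python build linked against a newer one) would be the first thing to try.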

Related

Timestamp object has no attribute 'split'

Sometimes when I run Python code in Google Colab it works at first, but on the 2nd or 3rd attempt the same chunk of code gives an error for unknown reasons, as if the code were wrong (even though nothing has been modified). As soon as I disconnect and restart the notebook, the exact same chunk of code runs normally again, without modifications. Has anyone come across this issue and knows how to fix it?
import datetime  # 1st chunk

def convert_date(x):  # 2nd chunk
    y = x.split(' ')[0]
    return datetime.datetime(int(y.split('/')[2]), int(y.split('/')[1]), int(y.split('/')[0]))

hr['Hire Date'] = hr['Hire Date'].map(lambda x: convert_date(x))  # 3rd chunk
When running the 3rd chunk it gives the error: AttributeError: 'Timestamp' object has no attribute 'split'
That is because you already applied the transformation to that column: after the first run the values are Timestamps, not strings, so split() no longer exists on them. Reload your data instead of restarting the kernel and it will work again.
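Alternatively, a minimal sketch of making the conversion idempotent, so re-running the chunk is harmless (assuming the raw column holds strings like 'dd/mm/yyyy hh:mm', as the original split logic implies):

import datetime
import pandas as pd

def convert_date(x):
    # A previous run already turned this cell into a Timestamp: pass it through
    if isinstance(x, (datetime.datetime, pd.Timestamp)):
        return x
    y = x.split(' ')[0]
    day, month, year = y.split('/')
    return datetime.datetime(int(year), int(month), int(day))

hr['Hire Date'] = hr['Hire Date'].map(convert_date)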

Python - Unable to connect to 2 different databases like Hive and IRIS in the same Python program

I am trying to connect to a Hive database and an InterSystems IRIS database using jaydebeapi in Python.
I am able to connect to one database at a time. While trying to connect to the other, I get the error below:
"Class org.apache.hive.jdbc.HiveDriver is not found" or
"Class com.intersystems.jdbc.IRISDriver is not found"
hive_con = jd.connect(java_driver_class, jdbc_conn_url, [hive_user, hive_pass], jarfile)  # line 1
iris_con = jd.connect(iris_driver_class, iris_conn_url, [iris_user, iris_pass], jarfile)  # line 2
If I execute the above code, only the first line runs; the second one raises an exception.
If I comment out the first line, then the second line works fine.
I tried closing one connection before opening the other, but the issue remains.
I want both connections to work in the same program.
Just pass in both jar files as a list, like this:
jar_files = [jar1, jar2]
hive_con = jd.connect(java_driver_class, jdbc_conn_url, [hive_user, hive_pass], jar_files)
iris_con = jd.connect(iris_driver_class, iris_conn_url, [iris_user, iris_pass], jar_files)
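The likely reason this is necessary (an explanation, not something stated in the answer itself): jaydebeapi starts the JVM through JPype on the first connect() call, and the JVM classpath cannot be extended afterwards, so every driver jar has to be supplied up front. A self-contained sketch, with hypothetical jar paths and connection URLs standing in for real ones:

import jaydebeapi as jd

# Hypothetical jar locations and URLs for illustration; substitute your own
jar_files = ["/opt/jdbc/hive-jdbc-standalone.jar", "/opt/jdbc/intersystems-jdbc.jar"]

# Both drivers are found because both jars were on the classpath
# when the JVM started with the first connect() call
hive_con = jd.connect(
    "org.apache.hive.jdbc.HiveDriver",
    "jdbc:hive2://hive-host:10000/default",
    ["hive_user", "hive_pass"],
    jar_files,
)
iris_con = jd.connect(
    "com.intersystems.jdbc.IRISDriver",
    "jdbc:IRIS://iris-host:1972/USER",
    ["iris_user", "iris_pass"],
    jar_files,
)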

Getting an error in Python that df is not defined

I am new to data, so after a few lessons on importing data in Python, I tried the following code in my Jupyter notebook but keep getting an error saying df is not defined. I need help.
The code I wrote is as follows:
import pandas as pd
url = "https://api.worldbank.org/v2/en/indicator/SH.TBS.INCD?downloadformat=csv"
df = pd.read_csv(url)
After running the third line, I got a series of error reports in the Jupyter notebook, but the one that stood out was "df not defined".
The problem here is that your data is a ZIP file containing multiple CSV files. You need to download the data, unpack the ZIP file, and then read one CSV file at a time.
If you can give more details on the problem (e.g. screenshots), debugging will become easier.
One possibility for the error is that the content served at the URL (https://api.worldbank.org/v2/en/indicator/SH.TBS.INCD?downloadformat=csv) is a ZIP file, which may prevent pandas from processing it directly.
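A minimal sketch of that download-and-unpack approach, assuming the requests library is available (the skiprows=4 value is an assumption about the World Bank file layout; inspect the file and adjust):

import io
import zipfile

import pandas as pd
import requests

url = "https://api.worldbank.org/v2/en/indicator/SH.TBS.INCD?downloadformat=csv"

# Download the ZIP archive and open it in memory
response = requests.get(url)
archive = zipfile.ZipFile(io.BytesIO(response.content))

# List the CSV files inside, then read one of them
print(archive.namelist())
with archive.open(archive.namelist()[0]) as f:
    df = pd.read_csv(f, skiprows=4)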

Loading previously saved JModelica result-file

I have the following question:
I am loading a JModelica model and simulating it easily by doing:
from pymodelica import compile_fmu
from pyfmi import load_fmu
model = load_fmu(SOME_FMU)
res = model.simulate()
Everything works fine and it even saves the result to a .txt file. The problem is that I have not found any functionality within the JModelica Python packages to load such a .txt result file later on into a result object (like the one returned by simulate()) so I can easily extract the previously saved data.
Implementing that by hand is of course possible, but I find it quite nasty and just wanted to ask if anyone knows of a method that loads that JModelica-format result file into a result object for me.
Thanks!!!!
The functionality that you need is located in the io module:
from pyfmi.common.io import ResultDymolaTextual
res = ResultDymolaTextual("MyResult.txt")
var = res.get_variable_data("MyVariable")
var.x  # trajectory
var.t  # corresponding time vector
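A small usage sketch, continuing from the answer's res object and assuming matplotlib is available, to inspect the recovered trajectory:

import matplotlib.pyplot as plt

# Plot the recovered trajectory against its time vector
plt.plot(var.t, var.x)
plt.xlabel("time")
plt.ylabel("MyVariable")
plt.show()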

GeoIPIPSP.dat Invalid database type

We have a commercial MaxMind subscription to obtain a GeoIP database with ISP information (GeoIPIPSP.dat). However, when I try to query this file, I keep getting the following error:
GeoIPError: Invalid database type, expected Org, ISP or ASNum
I'm using the python-api:
geo = GeoIP.open("/GeoIPIPSP.dat", GeoIP.GEOIP_STANDARD)
isp = geo.name_by_addr(ip) # or isp_by_addr with pygeoip
When I use the API to ask for the database type (geo._type) I get "1" ... the same value I get when I open a regular GeoIP.dat. I'm wondering if there's something wrong with GeoIPIPSP.dat, but it's the most recent file from MaxMind's customer download page.
Any insights greatly appreciated!
It turns out there was indeed a problem with the database file. After a re-download, everything works as it is supposed to.
I switched to pygeoip, though, and access the database like this:
import pygeoip
geo_isp = pygeoip.GeoIP("/usr/share/GeoIP/GeoIPIPSP.dat")
isp = geo_isp.isp_by_addr("8.8.8.8")
