Connect to Filemaker Database using JDBC, Python, and JayDeBeApi - python

I'm trying to write an AWS Lambda Python Package that will connect to a FileMaker database over JDBC. To test, I've launched an EC2 instance with the Lambda Linux AMI, and created a virtualenv (/venv) that I'm testing in. I've uploaded the fmjdbc.jar to the instance using WinSCP to /venv/lib/fmjdbc.jar. The code uses JayDeBeApi, following the usage example here: https://pypi.python.org/pypi/JayDeBeApi/#usage
My code so far is the following:
import jaydebeapi as jdb
driverclass = 'com.filemaker.jdbc.Driver'
jdbcURL = 'jdbc:filemaker://url:port;database'
jar = '/home/ec2-user/lambda-test-project/venv/lib/fmjdbc.jar'
print jar
conn = jdb.connect(driverclass,[jdbcURL,'username','password'],jar)
Which gives me the error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ec2-user/lambda-test-project/venv/local/lib/python2.7/site-package s/jaydebeapi/__init__.py", line 359, in connect
jconn = _jdbc_connect(jclassname, jars, libs, *driver_args)
File "/home/ec2-user/lambda-test-project/venv/local/lib/python2.7/site-package s/jaydebeapi/__init__.py", line 183, in _jdbc_connect_jpype
return jpype.java.sql.DriverManager.getConnection(*driver_args)
jpype._jexception.SQLExceptionPyRaisable: java.sql.SQLException: No suitable driver found for jdbc:filemaker://<MY URL STUFF IS HERE>
How can I get the jdbc driver to be read by Python's virtual environment? I'd like to have this code work in a Lambda package eventually, so I'm hoping there's a solution that can be integrated to the Python code that will work repeatedly on newly created servers.

You can use the jpype package to set the driver classpath for Python. I used it to connect to an Oracle DB before. Here is my sample code, which may be useful for you.
import jaydebeapi, jpype

classpath = "your jdbc jar driver path"
jvm_path = "/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.36.x86_64/jre/lib/amd64/server/libjvm.so"  # your Java VM path
jpype.startJVM(jvm_path, "-Djava.class.path=%s" % classpath)  # start the JVM with the driver on the class path
conn = jaydebeapi.connect(xxxxxx)  # connection arguments elided
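Applied to the FileMaker setup from the question, a minimal sketch might look like the following; the jar path, URL, and credentials are the placeholders from the question, and the jaydebeapi.connect() argument order matches the older 0.x API used above (newer releases take the URL and the credential list as separate arguments):
import jpype
import jaydebeapi

jar = '/home/ec2-user/lambda-test-project/venv/lib/fmjdbc.jar'
jvm_path = jpype.getDefaultJVMPath()  # or an explicit path to libjvm.so

# Start the JVM with the FileMaker driver on the class path *before*
# JayDeBeApi gets a chance to start it without one.
jpype.startJVM(jvm_path, '-Djava.class.path=%s' % jar)

conn = jaydebeapi.connect('com.filemaker.jdbc.Driver',
                          ['jdbc:filemaker://url:port;database',
                           'username', 'password'],
                          jar)
curs = conn.cursor()
curs.execute('SELECT COUNT(*) FROM some_table')  # hypothetical table name
print(curs.fetchall())
curs.close()
conn.close()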

Related

Call speedtest.Speedtest() from Python using --secure (to avoid speedtest.ConfigRetrievalError: HTTP Error 403: Forbidden)

I have a small Python3-script like this:
import speedtest
# Speedtest
test = speedtest.Speedtest() # <--- line 4
test.get_servers()
best = test.get_best_server()
print(f"Found: {best['host']} located in {best['country']}")
The first time I run it, it works and everything is fine; it outputs:
Found: speedtest.witcom.cloud:8080 located in Germany
Happy days.
The second time (and subsequent times) that I run the script, I get this error:
Traceback (most recent call last):
File "/Users/zeth/Code/pinger/pinger.py", line 4, in <module>
test = speedtest.Speedtest()
File "/usr/local/lib/python3.9/site-packages/speedtest.py", line 1095, in __init__
self.get_config()
File "/usr/local/lib/python3.9/site-packages/speedtest.py", line 1127, in get_config
raise ConfigRetrievalError(e)
speedtest.ConfigRetrievalError: HTTP Error 403: Forbidden
When Googling around, I saw that I could also call this module straight from the command line, but just running this:
$ speedtest-cli
That gives me the same kind of error:
Retrieving speedtest.net configuration...
Cannot retrieve speedtest configuration
ERROR: HTTP Error 403: Forbidden
But if I run the CLI command speedtest-cli --secure (docs for the --secure flag), then it goes through and outputs this:
Retrieving speedtest.net configuration...
Testing from Deutsche Telekom AG (212.185.228.168)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by hotspot.koeln (Cologne) [3.44 km]: 28.805 ms
Testing download speed................................................................................
Download: 30.01 Mbit/s
Testing upload speed......................................................................................................
Upload: 8.68 Mbit/s
The question
I can't figure out how to make this Python line, test = speedtest.Speedtest(), use the --secure flag (or otherwise go via HTTPS).
The documentation for speedtest-cli is scarce.
Other attempts
I found this solution: Python Speedtest facing problems with certification _ssl.c:1056, which suggests manually approving the certificates.
But in the directory /Volumes/Macintosh HD/Applications/ I don't have anything called Python 3.9; I have python3.9 installed via Brew, and I'm on a Mac.
I could do this:
test = speedtest.Speedtest(secure=True)
I looked into the source code myself, in this directory:
vim /usr/local/lib/python3.9/site-packages/speedtest.py
where I could see the constructor is defined like this:
class Speedtest(object):
    """Class for performing standard speedtest.net testing operations"""

    def __init__(self, config=None, source_address=None, timeout=10,
                 secure=False, shutdown_event=None):
        self.config = {}
        self._source_address = source_address
        self._timeout = timeout
        self._opener = build_opener(source_address, timeout)
        self._secure = secure
        ...
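Putting that together, here is a minimal version of the original script with the secure flag set; this mirrors what the --secure CLI flag does and forces the configuration request over HTTPS:
import speedtest

# secure=True makes speedtest fetch its configuration over HTTPS,
# which avoids the HTTP 403 seen above.
test = speedtest.Speedtest(secure=True)
test.get_servers()
best = test.get_best_server()
print(f"Found: {best['host']} located in {best['country']}")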

AWS Glue Python Shell package import

We created a Python shell job that connects to Redshift and fetches data; the program below works fine on my local system.
Below are the steps and programs.
Program:-
import sqlalchemy as sa
from sqlalchemy.orm import sessionmaker

#>>>>>>>> MAKE CHANGES HERE <<<<<<<<<<<<<
DATABASE = "#####"
USER = "#####"
PASSWORD = "#####"
HOST = "#####.redshift.amazonaws.com"
PORT = "5439"
SCHEMA = "test"  # default is "public"

####### connection and session creation ##############
connection_string = "redshift+psycopg2://%s:%s@%s:%s/%s" % (USER, PASSWORD, HOST, str(PORT), DATABASE)
engine = sa.create_engine(connection_string)
session = sessionmaker()
session.configure(bind=engine)
s = session()
SetPath = "SET search_path TO %s" % SCHEMA
s.execute(SetPath)
###### All set, session created using the provided schema #######

################ write queries from here ######################
query = "SELECT * FROM test1 limit 2;"
rr = s.execute(query)
all_results = rr.fetchall()

def pretty(all_results):
    for row in all_results:
        print("row start >>>>>>>>>>>>>>>>>>>>")
        for r in row:
            print(" ----", r)
        print("row end >>>>>>>>>>>>>>>>>>>>>>")

pretty(all_results)

########## close the session at the end ###############
s.close()
Steps:-
sudo pip install psycopg2
sudo pip install sqlalchemy
sudo pip install sqlalchemy-redshift
I uploaded the files psycopg2-2.8.4-cp27-cp27m-win32.whl, Flask_SQLAlchemy-2.4.1-py2.py3-none-any.whl, and sqlalchemy_redshift-0.7.5-py2.py3-none-any.whl to S3 (s3://####/lib/) and mapped that folder in the Python library path of the AWS Glue job.
When I run the program, the following error occurs.
Traceback (most recent call last):
File "/tmp/runscript.py", line 113, in <module>
download_and_install(args.extra_py_files)
File "/tmp/runscript.py", line 56, in download_and_install
download_from_s3(s3_file_path, local_file_path)
File "/tmp/runscript.py", line 81, in download_from_s3
s3.download_file(bucket_name, s3_key, new_file_path)
File "/usr/local/lib/python2.7/site-packages/boto3/s3/inject.py", line 172, in download_file
extra_args=ExtraArgs, callback=Callback)
File "/usr/local/lib/python2.7/site-packages/boto3/s3/transfer.py", line 307, in download_file
future.result()
File "/usr/local/lib/python2.7/site-packages/s3transfer/futures.py", line 106, in result
return self._coordinator.result()
File "/usr/local/lib/python2.7/site-packages/s3transfer/futures.py", line 265, in result
raise self._exception
botocore.exceptions.ClientError: An error occurred (404) when calling the HeadObject operation: Not Found
PS: The Glue job role has full access to S3.
Please suggest how to map those libraries to the program.
You can specify your own Python libraries packaged as .egg or .whl files with the "--extra-py-files" argument, as shown in the example below.
Command line example:
aws glue create-job --name python-redshift-test-cli --role role --command '{"Name" : "pythonshell", "ScriptLocation" : "s3://MyBucket/python/library/redshift_test.py"}'
--connections Connections=connection-name --default-arguments '{"--extra-py-files" : ["s3://MyBucket/python/library/redshift_module-0.1-py2.7.egg", "s3://MyBucket/python/library/redshift_module-0.1-py2.7-none-any.whl"]}'
Reference: Create a glue job with extra python library
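If you create the job from Python instead of the CLI, a boto3 equivalent might look like the sketch below; the job name, role, bucket, script, and extra-file paths are placeholders, and multiple extra files go into a single comma-separated string:
import boto3

glue = boto3.client("glue")

glue.create_job(
    Name="python-redshift-test-boto3",   # placeholder job name
    Role="MyGlueServiceRole",            # placeholder IAM role
    Command={
        "Name": "pythonshell",
        "ScriptLocation": "s3://MyBucket/python/library/redshift_test.py",
    },
    DefaultArguments={
        # comma-separated, no spaces; every key must exist in S3
        "--extra-py-files": "s3://MyBucket/python/library/redshift_module-0.1-py2.7.egg,"
                            "s3://MyBucket/python/library/redshift_module-0.1-py2.7-none-any.whl",
    },
)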
There is a simple way to import Python dependencies using .whl files, which can be found on PyPI for the particular module.
You can also add multiple wheel files from S3, separated by commas.
For example:
"s3://xxxxxxxxx/common/glue/glue_whl/fastparquet-0.4.1-cp37-cp37m-macosx_10_9_x86_64.whl,s3://xxxxxx/common/glue/glue_whl/packaging-20.4-py2.py3-none-any.whl,s3://xxxxxx/common/glue/glue_whl/s3fs-0.5.0-py3-none-any.whl"

Accessing OrientDB from Python

I want to convert a MySQL database of more than 1 million records into a graph database, because it is heavily linked, network-type data. The free version of Neo4j had some restrictions I thought I might bump up against, so I've installed OrientDB (Community 2.2.0) on Ubuntu Server 16.04 and got it working. Now I need to access it from Python (3.5.1+), so I'm trying pyorient (1.5.2). (I tried TinkerPop, since I eventually want to use Gremlin, but couldn't get the Gremlin console to talk to OrientDB.)
The following simple Python code, which connects to one of the test graphs in OrientDB:
import pyorient

username = "user"
password = "password"
client = pyorient.OrientDB("localhost", 2424)
session_id = client.connect(username, password)
print("SessionID=", session_id)
db_name = "GratefulDeadConcerts"
if client.db_exists(db_name, pyorient.STORAGE_TYPE_MEMORY):
    print("Database", db_name, "exists")
    client.db_open(db_name, username, password)
else:
    print("Database", db_name, "doesn't exist")
gives a weird error:
SessionID= 27
Database GratefulDeadConcerts exists
Traceback (most recent call last):
File "FirstTest.py", line 18, in <module>
client.db_open( db_name, username, password )
File "/home/tom/MyProgs/TestingPyOrient/env/lib/python3.5/site-packages/pyorient/orient.py", line 379, in db_open
.prepare((db_name, user, password, db_type, client_id)).send().fetch_response()
File "/home/tom/MyProgs/TestingPyOrient/env/lib/python3.5/site-packages/pyorient/messages/database.py", line 141, in fetch_response
info = OrientVersion(release)
File "/home/tom/MyProgs/TestingPyOrient/env/lib/python3.5/site-packages/pyorient/otypes.py", line 202, in __init__
self._parse_version(release)
File "/home/tom/MyProgs/TestingPyOrient/env/lib/python3.5/site-packages/pyorient/otypes.py", line 235, in _parse_version
self.build = int( self.build )
ValueError: invalid literal for int() with base 10: '0 (build develop#r79d281140b01c0bc3b566a46a64f1573cb359783; 2016'
Does anyone know what that is or how I can fix it? Should I really be using TinkerPop instead? If so, I'll post a separate question about my struggles with that.
I got the error at first too, but after upgrading pyorient to the latest version, 1.5.4, I get no errors.
$ python test.py
('SessionID=', 6)
('Database', 'GratefulDeadConcerts', 'exists')
$ python --version
Python 2.7.11

Launch a virtual machine remotely with Python VirtualBox API

I'm new to the VirtualBox API and I'm trying to launch a virtual machine remotely via VBoxWebSrv.exe, which is running locally (for testing).
I've done this so far:
from vboxapi import *

params = {'url': 'http://localhost:18083',
          'user': 'user',
          'password': 'password'}
webmgr = VirtualBoxManager('WEBSERVICE', params)
vbox = webmgr.getVirtualBox()
machines = vbox.getMachines()
for mach in machines:
    session = webmgr.getSessionObject(vbox)
    progress = mach.launchVMProcess(session, "gui", "")
but it crashes when it comes to the launchVMProcess method. I'm getting this error:
Traceback (most recent call last):
File "C:\Users\user\git\VirtualBox-Manager\VirtualBox_Manager\src\test.py", line 45, in <module>
progress = mach.launchVMProcess(session, "", "")
File "C:\Program Files\Oracle\VirtualBox\sdk\bindings\webservice\python\lib\VirtualBox_wrappers.py", line 1801, in __getattr__
return IUnknown.__getattr__(self, name)
File "C:\Program Files\Oracle\VirtualBox\sdk\bindings\webservice\python\lib\VirtualBox_wrappers.py", line 388, in __getattr__
raise AttributeError
AttributeError
Strangely, this works just fine when I use COM (i.e. without VBoxWebSrv.exe). It seems the method is not implemented for the web service, or a reference to it is missing; I'm not sure.
I'm using the newest SDK (5.0.14) together with VirtualBox 5.0.14, and the host machine is Windows 8.1 64-bit.
Is there any way to solve this?
Thank you very much for any ideas, I'm really stuck here.
As I suspected, the web service in SDK 5.0.14 is buggy and can't be used properly. This issue has been fixed in SDK 5.0.16, which was released today.
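Until the fixed SDK is in place, a local (non-web-service) fallback along the lines of the COM path the question says already works might look like this sketch; passing None as the style lets vboxapi pick the native binding for the platform, and "MyVM" is a placeholder machine name:
from vboxapi import VirtualBoxManager

# None selects the native binding (COM on Windows, XPCOM elsewhere),
# so no VBoxWebSrv.exe is involved.
webmgr = VirtualBoxManager(None, None)
vbox = webmgr.getVirtualBox()
mach = vbox.findMachine("MyVM")               # placeholder VM name
session = webmgr.getSessionObject(vbox)
progress = mach.launchVMProcess(session, "gui", "")
progress.waitForCompletion(-1)                # block until the VM process is up
session.unlockMachine()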

pyodbc: .mdb connection error on ubuntu

I am trying to access a .mdb file located on my system. My code looks like this:
import csv
import pyodbc
MDB = '/home/filebug/client/my.mdb'
DRV = '{Microsoft Access Driver (*.mdb)}'
PWD = 'mypassword'
conn = pyodbc.connect('DRIVER=%s;DBQ=%s;PWD=%s' % (DRV,MDB,PWD))
print conn
curs = conn.cursor()
SQL = 'SELECT * FROM InOutTable;' # insert your query here
curs.execute(SQL)
rows = curs.fetchall()
curs.close()
conn.close()
But I am getting the following error:
Traceback (most recent call last):
File "mdb.py", line 8, in <module>
conn = pyodbc.connect('DRIVER=%s;DBQ=%s;PWD=%s' % (DRV,MDB,PWD))
pyodbc.Error: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name not found, and no default driver specified (0) (SQLDriverConnect)')
I have pyodbc-3.0.6-py2.7-linux-i686.egg installed on my system, and I'm using Ubuntu 12.04.
Can anyone tell me what is wrong here?
I got this info from http://www.easysoft.com/developer/interfaces/odbc/linux.html:
In this case unixODBC could not locate the DSN "dsn_does_not_exist" and hence could not load the ODBC driver. Common reasons for this error are:
The DSN "dsn_does_not_exist" does not exist in your USER or SYSTEM ini files.
The DSN "dsn_does_not_exist" does exist in a defined ini file, but you have omitted the "Driver=xxx" attribute telling the unixODBC driver manager which ODBC driver to load.
The "Driver=/path_to_driver" in the odbcinst.ini file points to an invalid path, to a path to an executable where part of the path is not readable/searchable, or to a file that is not loadable (executable).
The Driver=xxx entry points to a shared object which does not export the necessary ODBC API functions (you can test this with dltest, included with unixODBC).
The ODBC driver defined by DRIVER=xxx in the odbcinst.ini file depends on other shared objects which are not on your dynamic linker search path. Run ldd on the driver shared object named by Driver= in the odbcinst.ini file and see which dependent shared objects cannot be found. If some cannot be found, then you need to define your LD_LIBRARY_PATH environment variable with the paths to any dependent shared objects, or add these paths to /etc/ld.so.conf and rerun ldconfig.
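In practice, the {Microsoft Access Driver (*.mdb)} driver named in the connection string only exists on Windows; on Ubuntu you need an MDB-capable ODBC driver (for example the one shipped with mdbtools) registered in odbcinst.ini. Here is a minimal diagnostic sketch, assuming a driver registered under the name "MDBTools" (that name is an assumption, check your odbcinst.ini) and a pyodbc recent enough to provide pyodbc.drivers():
import pyodbc

# List the drivers unixODBC actually knows about (pyodbc.drivers() needs
# pyodbc 4.x; on 3.0.6 inspect /etc/odbcinst.ini instead).
print(pyodbc.drivers())

MDB = '/home/filebug/client/my.mdb'

# "MDBTools" is an assumed driver name from an mdbtools odbcinst.ini entry.
conn = pyodbc.connect('DRIVER=MDBTools;DBQ=%s;' % MDB)
curs = conn.cursor()
curs.execute('SELECT * FROM InOutTable;')
print(curs.fetchall())
curs.close()
conn.close()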
