Connect to JDBC from Python - python

I am trying to connect to my GaussDB (or even PostgreSQL) database from Linux using jaydebeapi in Python, and I keep getting the error
Class name not found
I copied my jar file to /usr/lib/jvm/java-11-openjdk-amd46/lib/com.driver.jar. Is there something else I need to do?
import jaydebeapi
import sys

jaydebeapi.connect("com.gauss.Driver",
                   url, [username, password], "./file-jdbc.jar")

Error: Class com.gauss.driver not found

You can try to explicitly start a JVM and pass it the full path of your driver jar file:
import jaydebeapi
import jpype

# Start the JVM explicitly with the driver jar on the Java class path
jpype.startJVM(jpype.getDefaultJVMPath(),
               "-Djava.class.path=/usr/lib/jvm/java-11-openjdk-amd46/lib/com.driver.jar")

conn = jaydebeapi.connect("com.gauss.Driver", url, [username, password],
                          "/usr/lib/jvm/java-11-openjdk-amd46/lib/com.driver.jar")
If you're using PostgreSQL databases, I would also suggest taking a look at the psycopg library:
https://pypi.org/project/psycopg/
https://www.geeksforgeeks.org/introduction-to-psycopg2-module-in-python/
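Not from the original answer, but here is a minimal sketch of what a psycopg (version 3) connection looks like; the connection parameters are made up:
import psycopg

# Hypothetical connection string: host, database name and credentials are placeholders
with psycopg.connect("host=localhost port=5432 dbname=mydb user=myuser password=secret") as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT version()")
        print(cur.fetchone()[0])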

Related

Using OracleDB OS.Environment Password

I am trying to connect to an Oracle database with Python code. I am using the oracledb package, but I want the user to be able to connect to the DB with their own machine's username and password rather than hard-coding them into the script itself.
So far I have this,
import oracledb
import os
username=os.environ.get("Username")
pw=os.environ.get("pasword")
conn = oracledb.connect(user=username, password=pw, host="url", port=0000, service_name="service")
Source the environment variables (make them available to the Python process):
$ cat env.sh
export USERNAME=app_schema
export PASSWORD=secret
$ cat connect.py
import oracledb
import os

username = os.environ.get("USERNAME")
pw = os.environ.get("PASSWORD")
conn = oracledb.connect(user=username, password=pw, host="localhost", port=1521, service_name="XEPDB1")
c = conn.cursor()
c.execute('select dummy from dual')
for row in c:
    print(row[0])
conn.close()
$ # source the variables (note the dot)
$ . env.sh
$ python connect.py
X
Best of luck!

PYinstaller doesn't convert working python code into working exe

I'm trying to get a script scheduled in Windows:
import requests
from datetime import date
import pandas as pd
import sqlalchemy
from sqlalchemy import create_engine
server= 'xxxxx'
database='xxxxx'
driver_sql= 'ODBC Driver 17 for SQL Server'
database_con= f'mssql://#{server}/{database}?driver={driver_sql}'
engine= sqlalchemy.create_engine(database_con)
connection = engine.connect()
adr='https://api.swaggystocks.com/wsb/sentiment/ticker'
r = requests.get(adr)
json = r.json()
pdbasses= pd.DataFrame(json["data"])
pdbasses["timestamp"] = pd.to_datetime("today")
pdbasses["timestamp"] = pdbasses["timestamp"].dt.strftime("%Y-%m-%d")
pdbasses.to_sql("swaggy", connection, if_exists='append',index=False)
direct =r'D:\SQL\getdatafromhere\file'+str(date.today())+".csv"
pdbasses.to_csv(direct, index=False)
The problem is, when I put it through PyInstaller it says I'm missing packages that I can't even install via pip (MySQLdb, psycopg2, etc.).
Then when the exe file gets put out, I run it and it just blips for a second and doesn't do anything. The code works in PyCharm. I tried running PyInstaller through PyCharm, but I get a Linux executable/spec file. I tried converting that one to an exe file, but it still doesn't work.
Thank you for your time
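Not part of the original thread, but for illustration: one way to rebuild while skipping the optional SQLAlchemy dialects PyInstaller complains about is to exclude them explicitly; the script name below is a placeholder. Running the resulting exe from a console window also keeps any runtime traceback visible instead of the window closing right away.
import PyInstaller.__main__

# Hypothetical build helper: "my_script.py" is a placeholder for the actual script,
# and --exclude-module tells PyInstaller to skip dialects the code never imports.
PyInstaller.__main__.run([
    "my_script.py",
    "--onefile",
    "--exclude-module", "MySQLdb",
    "--exclude-module", "psycopg2",
])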

How to check If Path Exists Using Fabric2.x

I am using Fabric 2 and I don't see that it has an exists method to check whether a folder path exists on the remote server. Please let me know how I can achieve this in Fabric 2 (http://docs.fabfile.org/en/stable/).
I have seen a similar question, Check If Path Exists Using Fabric, but that is for the Fabric 1.x version.
You can run the test command remotely with the -d option to check that the path exists and is a directory, passing warn=True to the run method so execution doesn't stop on a non-zero exit status. The failed attribute of the result will then be True if the folder doesn't exist and False otherwise.
folder = '/path/to/folder'
if c.run('test -d {}'.format(folder), warn=True).failed:
    # Folder doesn't exist
    c.run('mkdir {}'.format(folder))
The exists method from fabric.contrib.files was moved to patchwork.files with a small signature change, so you can use it like this:
from fabric2 import Connection
from patchwork.files import exists

conn = Connection('host')
if exists(conn, SOME_REMOTE_DIR):
    do_something()
The code below checks for the existence of a file (-f); just change it to '-d' to check for the existence of a directory.
from fabric import Connection

c = Connection(host="host")
if c.run('test -f /opt/mydata/myfile', warn=True).failed:
    do.thing()
You can find it in the Fabric 2 documentation below:
https://docs.fabfile.org/en/2.5/getting-started.html?highlight=failed#bringing-it-all-together
That's not so difficult; you can use traditional Python code to check whether a path already exists.
from pathlib import Path
from fabric import Connection as connection, task
import os

@task
def deploy(ctx):
    parent_deploy_dir = '/var/www'
    deploy_dir = '/var/www/my_folder'
    host = 'REMOTE_HOST'
    user = 'USER'
    with connection(host=host, user=user) as c:
        with c.cd(parent_deploy_dir):
            if not os.path.isdir(Path(deploy_dir)):
                c.run('mkdir -p ' + deploy_dir)

Django no module named elasticsearch_dsl.connections

I'm trying to connect my Django model to the Elasticsearch server on localhost, but when I try
from elasticsearch_dsl.connections import connections
I get the error "ImportError: No module named elasticsearch_dsl.connections".
When I use this same command in the Django shell, it works fine.
search.py
from elasticsearch_dsl.connections import connections
from elasticsearch_dsl import DocType, Text, Date, Boolean, Integer, Keyword, fields
from elasticsearch.helpers import bulk
from elasticsearch import Elasticsearch
from .models import HomeGym, Country, Rating

connections.create_connection()

class HomeGymIndex(DocType):
    title = Text()
    price = fields.FloatField()
    tags = Keyword()
    city = Text()
    country = Text()
    rate = Integer()
    opusApproved = Boolean()

def bulk_indexing():
    HomeGymIndex.init()
    es = Elasticsearch()
    bulk(client=es, actions=(b.indexing() for b in HomeGym.objects.all().iterator()))
This leads to an ImportError on line 1. "No module named elasticsearch_dsl.connections"
The same import statement works in the shell though.
I've already done a pip install of elasticsearch and elasticsearch-dsl inside my virtualenv.
Here is the file structure:
my_website/
    elasticsearch/
        # elasticsearch files pulled from github
    elasticsearch-5.5.2-SNAPSHOT/
        # elasticsearch files
        bin/
            elasticsearch
    opus/
        manage.py
        homegymlistings/
            models.py
            search.py
            # other standard app files
        opus/
            # standard files for main django branch
    my_virtualenv/
        bin/
            activate
Why does my import statement only fail when called inside the search.py file located inside the homegymlistings app?
Run this:
pip install elasticsearch_dsl
Apparently I had to pip install elasticsearch and elasticsearch-dsl outside of the virtualenv. The error went away after that.
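As a small diagnostic sketch (not from the original answers), printing the interpreter and module search path from the place where the import fails shows whether the virtualenv's site-packages is actually being used:
import sys

# Run this from the same process that raises the ImportError (e.g. at the top of search.py)
# to see which interpreter and which site-packages directories are in effect.
print(sys.executable)
print(sys.path)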

Connect to Filemaker Database using JDBC, Python, and JayDeBeApi

I'm trying to write an AWS Lambda Python Package that will connect to a FileMaker database over JDBC. To test, I've launched an EC2 instance with the Lambda Linux AMI, and created a virtualenv (/venv) that I'm testing in. I've uploaded the fmjdbc.jar to the instance using WinSCP to /venv/lib/fmjdbc.jar. The code uses JayDeBeApi, following the usage example here: https://pypi.python.org/pypi/JayDeBeApi/#usage
My code so far is the following:
import jaydebeapi as jdb
driverclass = 'com.filemaker.jdbc.Driver'
jdbcURL = 'jdbc:filemaker://url:port;database'
jar = '/home/ec2-user/lambda-test-project/venv/lib/fmjdbc.jar'
print jar
conn = jdb.connect(driverclass,[jdbcURL,'username','password'],jar)
Which gives me the error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ec2-user/lambda-test-project/venv/local/lib/python2.7/site-packages/jaydebeapi/__init__.py", line 359, in connect
    jconn = _jdbc_connect(jclassname, jars, libs, *driver_args)
  File "/home/ec2-user/lambda-test-project/venv/local/lib/python2.7/site-packages/jaydebeapi/__init__.py", line 183, in _jdbc_connect_jpype
    return jpype.java.sql.DriverManager.getConnection(*driver_args)
jpype._jexception.SQLExceptionPyRaisable: java.sql.SQLException: No suitable driver found for jdbc:filemaker://<MY URL STUFF IS HERE>
How can I get the jdbc driver to be read by Python's virtual environment? I'd like to have this code work in a Lambda package eventually, so I'm hoping there's a solution that can be integrated to the Python code that will work repeatedly on newly created servers.
You can use the jpype package to set the driver path for Python. I used it for connecting to an Oracle DB before. Here is my sample code, which may be useful for you.
import jaydebeapi, jpype

classpath = "your jdbc jar driver path"
jvm_path = "/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.36.x86_64/jre/lib/amd64/server/libjvm.so"  # your java vm path
jpype.startJVM(jvm_path, "-Djava.class.path=%s" % classpath)  # start the jvm with the driver on the class path
conn = jaydebeapi.connect(xxxxxx)
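Continuing that sketch, once the connection is made it behaves like a normal DB-API connection; the query and table name here are hypothetical:
curs = conn.cursor()
curs.execute("SELECT * FROM some_table")  # hypothetical table name
print(curs.fetchall())
curs.close()
conn.close()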
