I want to load OSM data into a PostgreSQL database using a Python script. When I run the script below, it does not load the OSM data into the database. Can anyone guide me? I know how to load the data into PostgreSQL using osmosis, but currently I am looking for a way to do it with Python.
import psycopg2
from osgeo import ogr

# connect to the database
connection = psycopg2.connect(user="postgres",
                              password="password",
                              host="localhost",
                              database="example")

# create cursor
cursor = connection.cursor()
cursor.execute("DROP TABLE IF EXISTS trial1")
cursor.execute("CREATE TABLE trial1 (id SERIAL PRIMARY KEY, geom Geometry)")
cursor.execute("CREATE INDEX trial1_index ON trial1 USING GIST(geom)")
print("Successfully created")
connection.commit()

# define OSM file path
osm = ogr.Open("pedestrian.osm")
layer = osm.GetLayer(1)

# delete the existing contents of the table
cursor.execute("DELETE FROM trial1")
print(str(layer.GetFeatureCount()))

for i in range(layer.GetFeatureCount()):
    feature = layer.GetFeature(i)
    # get feature geometry
    geometry = feature.GetGeometryRef()
    wkt = geometry.ExportToWkt()
    # insert data into the database
    cursor.execute("INSERT INTO trial1 (geom) VALUES (ST_GeomFromText(" + "'" + wkt + "', 4326))")

print("Data inserted successfully")
connection.commit()
Expected result: this code should create a table in the database and also import data into it from the given file. Currently the code only creates the table; after table creation it does not raise an error, it just says Process finished with exit code 0.
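An educated guess at a fix (not confirmed against the asker's data): the GDAL OSM driver is built for sequential reading, so fetching features by index with GetFeature(i) can silently return nothing. Reading features sequentially, skipping null geometries, and using a parameterized INSERT is the safer pattern; note also that the Geometry column and ST_GeomFromText require the PostGIS extension. A minimal sketch:
import psycopg2
from osgeo import ogr

connection = psycopg2.connect(user="postgres", password="password",
                              host="localhost", database="example")
cursor = connection.cursor()

osm = ogr.Open("pedestrian.osm")
layer = osm.GetLayer(1)  # layer 1 is "lines" in the GDAL OSM driver

# read sequentially instead of calling GetFeature(i) by index
feature = layer.GetNextFeature()
while feature is not None:
    geometry = feature.GetGeometryRef()
    if geometry is not None:  # some OSM features carry no geometry
        # a parameterized query also avoids quoting problems with the WKT string
        cursor.execute("INSERT INTO trial1 (geom) VALUES (ST_GeomFromText(%s, 4326))",
                       (geometry.ExportToWkt(),))
    feature = layer.GetNextFeature()

connection.commit()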
From a Jupyter notebook, I was able to create a database with psycopg2. But somehow I was not able to create a table and store elements in it.
import psycopg2
from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT
con = psycopg2.connect("user=postgres password='abc'");
con.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT);
cursor = con.cursor();
name_Database = "socialmedia";
sqlCreateDatabase = "create database "+name_Database+";"
cursor.execute(sqlCreateDatabase);
With the above code, I can see the database named "socialmedia" from psql (Windows command prompt). But with the code below, I cannot see the table named "test_table" from psql.
import psycopg2
# Open a DB session
dbSession = psycopg2.connect("dbname='socialmedia' user='postgres' password='abc'");
# Open a database cursor
dbCursor = dbSession.cursor();
# SQL statement to create a table
sqlCreateTable = "CREATE TABLE test_table(id bigint, cityname varchar(128), latitude numeric, longitude numeric);";
# Execute CREATE TABLE command
dbCursor.execute(sqlCreateTable);
# Insert statements
sqlInsertRow1 = "INSERT INTO test_table values(1, 'New York City', 40.73, -73.93)";
sqlInsertRow2 = "INSERT INTO test_table values(2, 'San Francisco', 37.733, -122.446)";
# Insert statement
dbCursor.execute(sqlInsertRow1);
dbCursor.execute(sqlInsertRow2);
# Select statement
sqlSelect = "select * from test_table";
dbCursor.execute(sqlSelect);
rows = dbCursor.fetchall();
# Print rows
for row in rows:
    print(row);
I can get the elements only from the Jupyter notebook, not from psql, and it seems the elements are stored only temporarily. How can I see the table and its elements from psql and keep them permanently?
I don't see any dbCursor.execute('commit') in the second part of your question.
You have provided an example with AUTOCOMMIT, which works, and you are asking why results are stored temporarily when you are not using AUTOCOMMIT?
Well, they are not committed!
They are stored only for the current session; that's why you can still read them from your Jupyter session.
Also:
- you don't need to put semicolons in your Python code
- you don't need to put semicolons in your SQL code (except when you execute multiple statements, which is not the case here)
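For reference, a minimal sketch of the second snippet with the commit added (same connection parameters as in the question):
import psycopg2

dbSession = psycopg2.connect("dbname='socialmedia' user='postgres' password='abc'")
dbCursor = dbSession.cursor()
dbCursor.execute("CREATE TABLE test_table(id bigint, cityname varchar(128), latitude numeric, longitude numeric)")
dbCursor.execute("INSERT INTO test_table VALUES (1, 'New York City', 40.73, -73.93)")
dbSession.commit()  # without this, the table and rows are rolled back when the session ends
After the commit, the table and its rows are visible from psql as well.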
My MSSQL DB table has the following structure:
create table TEMP
(
MyXMLFile XML
)
Using Python, I am trying to load a locally stored .XML file into the MS SQL DB (no XML parsing required).
Following is the Python code:
import pyodbc
import xlrd
import xml.etree.ElementTree as ET
print("Connecting..")
# Establish a connection between Python and SQL Server
conn = pyodbc.connect('Driver={SQL Server};'
'Server=TEST;'
'Database=test;'
'Trusted_Connection=yes;')
print("DB Connected..")
# Get XMLFile
XMLFilePath = open(r'C:\HelloWorld.xml')
# Create Table in DB
CreateTable = """
create table test.dbo.TEMP
(
XBRLFile XML
)
"""
# execute create table
cursor = conn.cursor()
try:
    cursor.execute(CreateTable)
    conn.commit()
except pyodbc.ProgrammingError:
    pass
print("Table Created..")
InsertQuery = """
INSERT INTO test.dbo.TEMP (
XBRLFile
) VALUES (?)"""
# Assign values from each row
values = (XMLFilePath)
# Execute SQL Insert Query
cursor.execute(InsertQuery, values)
# Commit the transaction
conn.commit()
# Close the database connection
conn.close()
But the code is storing the XML path in the MyXMLFile column, not the XML file itself. I looked at the lxml library and other tutorials, but I did not find a straightforward approach for storing the file.
Can anyone please help me with this? I have just started working with Python.
Here is a solution to load an .XML file directly into the MS SQL DB using Python.
import pyodbc
from lxml import etree  # the updated lines below use lxml's etree
print("Connecting..")
# Establish a connection between Python and SQL Server
conn = pyodbc.connect('Driver={SQL Server};'
'Server=TEST;'
'Database=test;'
'Trusted_Connection=yes;')
print("DB Connected..")
# Get XMLFile
XMLFilePath = open(r'C:\HelloWorld.xml')
x = etree.parse(XMLFilePath)  # Updated Code line
with open("FileName", "wb") as f:  # Updated Code line
    f.write(etree.tostring(x))  # Updated Code line
# Create Table in DB
CreateTable = """
create table test.dbo.TEMP
(
XBRLFile XML
)
"""
# execute create table
cursor = conn.cursor()
try:
    cursor.execute(CreateTable)
    conn.commit()
except pyodbc.ProgrammingError:
    pass
print("Table Created..")
InsertQuery = """
INSERT INTO test.dbo.TEMP (
XBRLFile
) VALUES (?)"""
# Assign values from each row
values = etree.tostring(x) # Updated Code line
# Execute SQL Insert Query
cursor.execute(InsertQuery, values)
# Commit the transaction
conn.commit()
# Close the database connection
conn.close()
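A simpler variant, assuming the file fits comfortably in memory, is to skip the parse/serialize round trip and pass the raw file contents as the query parameter (the path is the same hypothetical one as above):
# read the raw XML text and insert it as-is
with open(r'C:\HelloWorld.xml', 'r', encoding='utf-8') as f:
    xml_text = f.read()

cursor.execute("INSERT INTO test.dbo.TEMP (XBRLFile) VALUES (?)", (xml_text,))
conn.commit()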
print('Files in Drive:')
!ls drive/AI
Files in Drive:
database.sqlite
Reviews.csv
Untitled0.ipynb
fine_food_reviews.ipynb
Titanic.csv
When I run the above code in Google Colab, my sqlite file is clearly present in my drive. But whenever I run a query against this file, it fails:
import sqlite3
import pandas as pd

# using the SQLite table to read data
con = sqlite3.connect('database.sqlite')

# filtering only positive and negative reviews, i.e.
# not taking into consideration those reviews with Score = 3
filtered_data = pd.read_sql_query("SELECT * FROM Reviews WHERE Score != 3", con)
DatabaseError: Execution failed on sql 'SELECT * FROM Reviews WHERE Score != 3': no such table: Reviews
Below you will find code that addresses the db setup on the Colab VM, table creation, data insertion and data querying. Execute all code snippets in individual notebook cells.
Note, however, that this example only shows how to execute the code on a non-persistent Colab VM. If you want to save your database to GDrive, you will have to mount your GDrive first (source):
from google.colab import drive
drive.mount('/content/gdrive')
and navigate to the appropriate file directory after.
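For example, assuming your files live in a folder named AI, as in the listing above:
import os
os.chdir('/content/gdrive/My Drive/AI')  # adjust to your own folder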
Step 1: Create DB
import sqlite3
conn = sqlite3.connect('SQLite_Python.db') # You can create a new database by changing the name within the quotes
c = conn.cursor() # The database will be saved in the location where your 'py' file is saved
# Create table - SqliteDb_developers
c.execute('''CREATE TABLE SqliteDb_developers
             ([id] INTEGER PRIMARY KEY, [name] text, [email] text, [joining_date] date, [salary] integer)''')
conn.commit()
Test whether the DB was created successfully:
!ls
Output:
sample_data SQLite_Python.db
Step 2: Insert Data Into DB
import sqlite3
sqliteConnection = None
try:
    sqliteConnection = sqlite3.connect('SQLite_Python.db')
    cursor = sqliteConnection.cursor()
    print("Successfully Connected to SQLite")
    sqlite_insert_query = """INSERT INTO SqliteDb_developers
                             (id, name, email, joining_date, salary)
                             VALUES (1, 'Python', 'MakesYou#Fly.com', '2020-01-01', 1000)"""
    cursor.execute(sqlite_insert_query)
    sqliteConnection.commit()
    print("Record inserted successfully into SqliteDb_developers table ", cursor.rowcount)
    cursor.close()
except sqlite3.Error as error:
    print("Failed to insert data into sqlite table", error)
finally:
    if sqliteConnection:
        sqliteConnection.close()
        print("The SQLite connection is closed")
Output:
Successfully Connected to SQLite
Record inserted successfully into SqliteDb_developers table 1
The SQLite connection is closed
Step 3: Query DB
import sqlite3
conn = sqlite3.connect("SQLite_Python.db")
cur = conn.cursor()
cur.execute("SELECT * FROM SqliteDb_developers")
rows = cur.fetchall()
for row in rows:
    print(row)
conn.close()
Output:
(1, 'Python', 'MakesYou#Fly.com', '2020-01-01', 1000)
Try this instead, to see which tables are actually in the file:
"SELECT name FROM sqlite_master WHERE type='table'"
Give a similar sharable id to your database file, just like you did with Reviews.csv:
database_file = drive.CreateFile({'id': 'your_sharable_id for sqlite file'})
database_file.GetContentFile('database.sqlite')
If you are trying to access the files from your Google Drive, you need to mount the drive first:
from google.colab import drive
drive.mount('/content/drive')
After you do this, right-click on the file that you intend to read in the Colab session, select 'Copy path', and paste it into the connection string.
con = sqlite3.connect('/content/database.sqlite')
You can now read the file.
con = sqlite3.connect('database.sqlite')
filtered_data = pd.read_sql_query("SELECT * FROM Reviews WHERE Score !=3",con)
If you execute this twice you will definitely end up with this type of error, so execute it exactly once, without any failure. If you do get an error, remove the database.sqlite file and extract it again; then execute it once more without any fail/error. This worked for me.
I am on a Linux platform with a Cassandra database. I want to insert image data into the Cassandra database using Python code, from a remote server. Previously, I had written Python code that inserts image data into a MySQL database from a remote server. Please see the MySQL code below:
#!/usr/bin/python
# -*- coding: utf-8 -*-
import MySQLdb

def read_image(i):
    filename = "/home/faban/Downloads/Python/Python-Mysql/images/im"
    filename = filename + str(i) + ".jpg"
    print(filename)
    fin = open(filename)
    img = fin.read()
    return img

con = MySQLdb.connect("192.168.50.12", "root", "faban", "experiments")
with con:
    print('connecting to database')
    range_from = input('Enter range from:')
    range_till = input('Enter range till:')
    for i in range(range_from, range_till):
        cur = con.cursor()
        data = read_image(i)
        cur.execute("INSERT INTO images VALUES(%s, %s)", (i, data))
        cur.close()
        con.commit()
con.close()
This code successfully inserts data into the MySQL database, which is located at .12.
I want to modify the same code to insert data into the Cassandra database, which is also located at .12.
Please help me out in this regard.
If I create a simple table like this:
CREATE TABLE stackoverflow.images (
name text PRIMARY KEY,
data blob);
I can load those images with Python code that is similar to yours, but with some minor changes to use the DataStax Python Cassandra driver (pip install cassandra-driver):
#imports for DataStax Cassandra driver and sys
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import SimpleStatement
import sys
#reading my hostname, username, and password from the command line; defining my Cassandra keyspace as a variable
hostname=sys.argv[1]
username=sys.argv[2]
password=sys.argv[3]
keyspace="stackoverflow"
#adding my hostname to an array, setting up auth, and connecting to Cassandra
nodes = []
nodes.append(hostname)
auth_provider = PlainTextAuthProvider(username=username, password=password)
ssl_opts = {}
cluster = Cluster(nodes,auth_provider=auth_provider,ssl_options=ssl_opts)
session = cluster.connect(keyspace)
#setting my image name, loading the file, and reading the data
name = "IVoidWarranties.jpg"
fileHandle = open("/home/aploetz/Pictures/" + name, 'rb')  # open in binary mode for the blob column
imgData = fileHandle.read()
#preparing and executing my INSERT statement
strCQL = "INSERT INTO images (name,data) VALUES (?,?)"
pStatement = session.prepare(strCQL)
session.execute(pStatement,[name,imgData])
#closing my connection
session.shutdown()
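To run it, pass the connection details on the command line, for example python load_images.py 192.168.50.12 myuser mypass (the script name and credentials are hypothetical; they map to sys.argv[1] through sys.argv[3] above).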
Hope that helps!
I wrote some Python code to create a table, but when I open DB Browser for SQLite, it does not show the table I created. I am new to databases, so can anyone tell me what is wrong with it? Many thanks!
import sqlite3
conn = sqlite3.connect('test1.sqlite')
cur = conn.cursor()
cur.execute('''
DROP TABLE IF EXISTS Test''')
cur.execute('''
CREATE TABLE Test (azaz TEXT, count INTEGER)''')
cur.execute('''INSERT INTO Test (azaz, count)
VALUES ( 'aa', 1 )''' )
conn.commit()
conn.close()
Image link: imgur.com/epfar.png
Your code is right, and if you try:
import sqlite3
conn = sqlite3.connect('test1.sqlite')
row = conn.execute('SELECT * FROM Test').fetchone()
print("azaz=", row[0])
print("count=", row[1])
You will see this output:
('azaz=', u'aa')
('count=', 1)
So the table has been created and the values have been inserted into it.
I have just tested your code and it works flawlessly. I used Python 3.5 and DB Browser for SQLite, tested on Windows 7 Pro.
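If DB Browser still shows nothing, one common cause is opening a different file than the one the script wrote: with a relative path, sqlite3.connect creates test1.sqlite in the current working directory. A quick check (a minimal sketch):
import os
import sqlite3

db_path = os.path.abspath('test1.sqlite')
print(db_path)  # open exactly this file in DB Browser for SQLite

conn = sqlite3.connect(db_path)
print(conn.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall())
conn.close()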