I have two data sets in my JSON API, but I am unable to insert both into SQL Server. The iteration using a for loop doesn't seem to pick up the second record. Can someone please help me understand how to fix this? This is new to me, so I am not able to work out what's wrong, since the coding is a bit different from SQL.
import urllib, json
import pyodbc

# read data from API
url = "http://nagiosdatagateway.vestas.net/esq/ITE1452552/logstash-2018.12.16/2/desc"
response = urllib.urlopen(url)
data = json.loads(response.read())

# define db connection
cnxn = pyodbc.connect("Driver={SQL Server Native Client 11.0};"
                      "Server=DKCDCVDCP42\DPA;"
                      "Database=VPDC;"
                      "Trusted_Connection=yes;")
cursor = cnxn.cursor()

i = 0
j = len(data)
print j
for i in range(i, j-1):
    # print data[1]["_source"]["utc_timestamp"]
    print i
    print data[i]["_source"]["nagios_comment"]
    print data[i]["_source"]["nagios_author"]
cursor.execute("insert into vpdc.pa.Pythontable(nagios_comment,nagios_author) values (?,?)",
               (data[i]["_source"]["nagios_comment"], data[i]["_source"]["nagios_author"]))
i += 1
print i
cnxn.commit()
Both of these sets of values should end up in the SQL table, in the columns Nagios_comment and Nagios_author:

307262828 Alex Christopher Ramos
307160348 Alex Christopher Ramos
The issue was resolved by correctly indenting the cursor.execute statement in the script, as below. In my original script there was no indentation on that line, so it was called outside the loop.
import urllib, json
import pyodbc

# read data from API
url = "http://nagiosdatagateway.vestas.net/esq/ITE1452552/logstash-2018.12.16/2/desc"
response = urllib.urlopen(url)
data = json.loads(response.read())

# define db connection
cnxn = pyodbc.connect("Driver={SQL Server Native Client 11.0};"
                      "Server=DKCDCVDCP42\DPA;"
                      "Database=VPDC;"
                      "Trusted_Connection=yes;")
cursor = cnxn.cursor()

i = 0
j = len(data)
print j
for i in range(0, 2):
    # print data[1]["_source"]["utc_timestamp"]
    print data[i]["_source"]["nagios_comment"]
    print data[i]["_source"]["nagios_author"]
    cursor.execute("insert into vpdc.pa.Pythontable(nagios_comment,nagios_author) values (?,?)",
                   (data[i]["_source"]["nagios_comment"], data[i]["_source"]["nagios_author"]))
cnxn.commit()
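If the loop bound shouldn't be hardcoded to 2, the same pattern generalizes to however many records the API returns. A sketch using sqlite3 as a stand-in for the pyodbc connection (any DB-API driver exposes the same cursor/execute/commit interface; the sample data below is fabricated to mimic the question's JSON shape):

```python
import sqlite3

# stand-in for the pyodbc connection in the question; the table name
# follows the question, the data below is a fabricated sample
cnxn = sqlite3.connect(":memory:")
cursor = cnxn.cursor()
cursor.execute("create table Pythontable (nagios_comment text, nagios_author text)")

# stand-in for the parsed API response, shaped like the question's data
data = [
    {"_source": {"nagios_comment": "307262828", "nagios_author": "Alex Christopher Ramos"}},
    {"_source": {"nagios_comment": "307160348", "nagios_author": "Alex Christopher Ramos"}},
]

# range(len(data)) covers every record regardless of how many come back,
# and the execute is indented so it runs once per iteration
for i in range(len(data)):
    cursor.execute(
        "insert into Pythontable (nagios_comment, nagios_author) values (?, ?)",
        (data[i]["_source"]["nagios_comment"], data[i]["_source"]["nagios_author"]))
cnxn.commit()
```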
I scraped data from a website and want to save it into a MySQL table. I created the table in my database with this code:

create table car (Model varchar(60), Mileage varchar(60), Price varchar(60))

I also have code that collects this data from truecar.com, but I cannot insert the data into my table. Could you help me? I get this error:

ProgrammingError: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near ('$32,000')' at line 1
import requests
from bs4 import BeautifulSoup
import mysql.connector

url = 'https://www.truecar.com/used-cars-for-sale/listings/'
r = requests.get(url)
soup = BeautifulSoup(r.text, 'html.parser')
cards = soup.select('div.linkable.card.card-shadow.vehicle-card._1qd1muk')

data = []
for card in cards:
    vehicleCardYearMakeModel = card.find("div", {"data-test": "vehicleCardYearMakeModel"}).text.replace('Sponsored', '')
    vehicleMileage = card.find("div", {"data-test": "vehicleMileage"}).text
    vehiclePrice = card.find("div", {"data-test": "vehicleCardPricingBlockPrice"}).text
    data.append({'price': vehiclePrice, 'miles': vehicleMileage, 'models': vehicleCardYearMakeModel})
print(data)

cnx = mysql.connector.connect(user='root', password='',
                              host='127.0.0.1',
                              database='cars')
cursor = cnx.cursor()
for entry in data:
    cursor.execute("INSERT INTO car(Model,Mileage,Price) VALUES(\'%s\',\'%s\,\'%s\')" %
                   (entry['models'], entry['miles'], entry['price']))
cnx.commit()
cnx.close()
You're missing the closing quote (') after the miles value:
cursor.execute("INSERT INTO car(Model,Mileage,Price) VALUES(\'%s\',\'%s\',\'%s\')"% (entry['models'],entry['miles'],entry['price']))
# Here -----------------------------------------------------------------^
Having said that, using placeholders will save you a lot of headaches:
cursor.execute("INSERT INTO car(Model,Mileage,Price) VALUES(%s,%s,%s)", (entry['models'],entry['miles'],entry['price']))
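To illustrate why placeholders help, here is a sketch using sqlite3 as a stand-in database (sqlite3 uses ? where mysql.connector uses %s; table and keys follow the question, the rows are fabricated samples). The driver quotes the values itself, so a price like $32,000 needs no manual escaping, and executemany inserts the whole list in one call:

```python
import sqlite3

# stand-in connection; with mysql.connector the placeholder is %s, not ?
cnx = sqlite3.connect(":memory:")
cursor = cnx.cursor()
cursor.execute("CREATE TABLE car (Model TEXT, Mileage TEXT, Price TEXT)")

# fabricated sample rows shaped like the scraped data
data = [
    {'models': '2019 Ford Fiesta', 'miles': '32,118 miles', 'price': '$12,500'},
    {'models': '2017 Honda Civic', 'miles': '45,020 miles', 'price': '$32,000'},
]

# one executemany call inserts every row; the driver handles quoting
cursor.executemany(
    "INSERT INTO car (Model, Mileage, Price) VALUES (?, ?, ?)",
    [(e['models'], e['miles'], e['price']) for e in data])
cnx.commit()
```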
Since yesterday I have been looking for a way to convert the output of an SQL query into a Pandas dataframe; for example, code that does this:

data = select * from table

I've tried many snippets I found on the internet, but nothing seems to work.
Note that my database is stored in Azure Databricks and I can only access the table using its URL.
Thank you so much!
Hope this helps you out. Both insertion and selection are in this code for reference.

import urllib.parse
import pyodbc
from sqlalchemy import create_engine

def db_insert_user_level_info(table_name):
    # call your DF here, as an argument in the function, or pass it directly
    df = df_parameter
    params = urllib.parse.quote_plus("DRIVER={SQL Server};SERVER=DESKTOP-ITAJUJ2;DATABASE=githubAnalytics")
    engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)
    engine.connect()
    table_row_count = select_row_count(table_name)
    df_row_count = df.shape[0]
    if table_row_count == df_row_count:
        print("Data Cannot Be Inserted Because The Row Count is Same")
    else:
        df.to_sql(name=table_name, con=engine, index=False, if_exists='append')
        print("********************************** DONE, EXECUTED SUCCESSFULLY ***************************************************")

def select_row_count(table_name):
    cnxn = pyodbc.connect("Driver={SQL Server Native Client 11.0};"
                          "Server=DESKTOP-ITAJUJ2;"
                          "Database=githubAnalytics;"
                          "Trusted_Connection=yes;")
    cur = cnxn.cursor()
    try:
        db_cmd = "SELECT count(*) FROM " + table_name
        res = cur.execute(db_cmd)
        # do something with your result set, for example print all the results:
        for x in res:
            return x[0]
    except:
        print("Table is not Available, Please Wait...")
Using sqlalchemy to connect to the database, and the built-in method read_sql_query from pandas to go straight to a DataFrame:
import pandas as pd
from sqlalchemy import create_engine
engine = create_engine(url)
connection = engine.connect()
query = "SELECT * FROM table"
df = pd.read_sql_query(query,connection)
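As a runnable illustration of the same idea, here is a sketch using an in-memory SQLite database as a stand-in, since the Databricks URL from the question isn't available here (pandas accepts a plain sqlite3 connection as well as a SQLAlchemy one; the table and rows are fabricated samples):

```python
import sqlite3
import pandas as pd

# stand-in for the real connection; with Databricks you would pass
# a SQLAlchemy engine created from its connection URL instead
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, "b")])
conn.commit()

# read_sql_query returns the result set directly as a DataFrame
df = pd.read_sql_query("SELECT * FROM t", conn)
print(df.shape)  # (2, 2)
```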
I was trying to read some data from a text file and write it to a SQL Server table using the Pandas module and a for loop. Below is my code.
import pandas as pd
import pyodbc

driver = '{SQL Server Native Client 11.0}'
conn = pyodbc.connect(
    Trusted_Connection='Yes',
    Driver=driver,
    Server='***********',
    Database='Sullins_Data'
)

def createdata():
    cursor = conn.cursor()
    cursor.execute(
        'insert into Sullins_Datasheet(Part_Number,Web_Link) values(?,?);',
        (a, j))
    conn.commit()

a = pd.read_csv('check9.txt', header=None, names=['Part_Number', 'Web_Links'])  # 2 columns, 8 rows
b = pd.DataFrame(a)
p_no = (b['Part_Number'])
w_link = (b['Web_Links'])
# print(p_no)

for i in p_no:
    a = i
for l in w_link:
    j = l
createdata()
As you can see from the code, I created two variables, a and j, to hold the values of both columns of the text file one by one and write them to the SQL table.
But after running the code I got only the last row's values in the table, out of 8 rows.
When I called the createdata function inside the w_link for loop, it wrote duplicate values to the table.
Please suggest where I am going wrong.
Here is a sample of how your code is working:

a = 0
b = 0
ptr = ['s', 'd', 'f', 'e']
pt = ['a', 'b', 'c', 'd']
for i in ptr:
    a = i
    print(a, end='')
for j in pt:
    b = j
    print(b, end='')
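The two loops run one after the other, so by the time createdata() is called once at the end, a and j simply hold the last values from each column. One way to pair the columns row by row is to zip them and insert inside a single loop. A sketch using sqlite3 as a stand-in for the pyodbc connection, with fabricated sample lists in place of the columns read from check9.txt:

```python
import sqlite3

# stand-in for the pyodbc connection; table name follows the question
conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute("create table Sullins_Datasheet (Part_Number text, Web_Link text)")

# fabricated stand-ins for the two pandas columns from check9.txt
p_no = ["P1", "P2", "P3"]
w_link = ["http://a", "http://b", "http://c"]

# zip pairs the columns row by row, so each execute inserts one full row
for part, link in zip(p_no, w_link):
    cursor.execute(
        "insert into Sullins_Datasheet (Part_Number, Web_Link) values (?, ?)",
        (part, link))
conn.commit()
```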
import mysql.connector

def view_empdetails():  # this is my function: it works great
    conn = mysql.connector.connect(host="localhost", user="root", passwd="#####", database="#DB")
    cursor = conn.cursor()  # this is the database connection
    viw = """select * from employees"""
    cursor.execute(viw)
    # fetch all data from the employee table in the DB
    for emp_no, first_name, last_name, gender, DOB, street, city, state, zipcode, email, phone, hire_date in cursor.fetchall():
        print('-'*50)
        print(emp_no)
        print(first_name)
        print(last_name)
        print(gender)
        print(DOB)
        print(street)
        print(city)
        print(state)    # I need all this output in a table or organized format,
        print(zipcode)  # not only a list of records
        print(email)
        print(phone)
        print(hire_date)
        print('-'*50)
    conn.commit()
    conn.close()
    return menu2()
I need all the records in one table. The code brings data from the database line by line without formatting; I need it in a table.
I'm not sure if you are familiar with the pandas library, but I believe it is helpful here. I have never used it with mysql, but I have used it with psycopg2 and pyodbc, so the same basic idea should work:

data = pd.DataFrame(cur.fetchall(), columns=colnames)

creates a DataFrame (think of a Python spreadsheet or table) that uses the column names from the table you're querying.
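The column names don't need to be typed out by hand: any DB-API cursor exposes them through cursor.description after a query. A sketch using sqlite3 as a stand-in for mysql.connector, with a hypothetical, cut-down employees table:

```python
import sqlite3
import pandas as pd

# stand-in database with a fabricated, cut-down employees table
conn = sqlite3.connect(":memory:")
conn.execute("create table employees (emp_no integer, first_name text, last_name text)")
conn.executemany("insert into employees values (?, ?, ?)",
                 [(1, "Alex", "Ramos"), (2, "Dana", "Lee")])

cur = conn.cursor()
cur.execute("select * from employees")

# cursor.description holds one tuple per column; the first field is the name
colnames = [desc[0] for desc in cur.description]
data = pd.DataFrame(cur.fetchall(), columns=colnames)
print(data.to_string(index=False))  # prints the records as one aligned table
```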
Struggling to figure out why this isn't working. I don't get any errors, but it will not write to the table.
import pyodbc

connprod = pyodbc.connect('DRIVER={SQL Server};SERVER=server;DATABASE=master;Trusted_Connection=yes')
cursorprod = connprod.cursor()
conndev = pyodbc.connect('DRIVER={SQL Server};SERVER=server;DATABASE=master;Trusted_Connection=yes')
cursordev = conndev.cursor()
connlocal = pyodbc.connect('DRIVER={SQL Server};SERVER=server;DATABASE=DBA;Trusted_Connection=yes')
cursorlocal = connlocal.cursor()

cursorprod.execute("SELECT Servername = @@servername, Date = getdate(), wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms, signal_wait_time_ms FROM sys.dm_os_wait_stats")
rows = cursorprod.fetchall()
for row in rows:
    cursorlocal.execute('insert into dba.dbo.dm_os_wait_stats values (?,?,?,?,?,?,?)', row)
cursorlocal.commit
If your example is accurate, you're not calling the commit method:
cursorlocal.commit()
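In Python, writing conn.commit without parentheses merely evaluates the bound method object and discards it; the method never runs, so nothing is committed. A minimal sketch with a hypothetical stand-in class (no real database needed to see the effect):

```python
class FakeConnection:
    """Hypothetical stand-in that records whether commit() was called."""
    def __init__(self):
        self.committed = False
    def commit(self):
        self.committed = True

conn = FakeConnection()
conn.commit            # just references the method; nothing happens
print(conn.committed)  # False
conn.commit()          # actually calls it
print(conn.committed)  # True
```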