GET data using requests then insert into DB2 - python

Currently I am trying to retrieve JSON data from an API and store it in a database. I am able to retrieve the JSON data as a list, and I am able to connect to and query the DB2 database. My issue is that I cannot figure out how to generate an INSERT statement for the data retrieved from the API. The application is only for short-term personal use, so SQL injection attacks are not a concern. So overall I need to generate an SQL INSERT statement from a list. My current code is below, with the API URL and info changed.
import ibm_db
import requests
ibm_db_conn = ibm_db.connect("DATABASE=node1;HOSTNAME=100.100.100.100;PORT=50000;PROTOCOL=TCPIP;UID=username;PWD=password;", "", "")
api_request = requests.get("http://api-url/resource?api_key=123456",
                           auth=('user@api.com', 'password'))
api_code = api_request.status_code
api_data = api_request.json()
print(api_code)
print(api_data)

Depends on the format of the JSON returned, and on what your table looks like. My first thought, though, is to use Python's json module:
import json
#...
#...
api_data = json.loads(api_request.text)  # note: api_request.json() already returns a parsed object
Now, you have a Python object you can access like normal:
api_data["key"][2]
for instance. You can iterate, slice, or do whatever else to extract the data you want. Say your JSON represented rows to be inserted:
query = "INSERT INTO <table> VALUES\n"
i = 0
for row in api_data:
query += "(%s)" %([i for i in row])
if i < len(api_data)-1: query += ",\n"
i += 1
Again, this will vary greatly depending on the format of your table and JSON, but that's the general idea I'd start with.
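Since you are already connected with ibm_db, here is a minimal sketch of executing the generated statement (MY_TABLE and the three parameter markers are assumptions; substitute your real table and column count). The prepared variant is shown as an alternative because it sidesteps quoting problems even when injection is not a concern:
# execute the string built above (hypothetical table name MY_TABLE)
stmt = ibm_db.exec_immediate(ibm_db_conn, query)

# or, alternatively, insert row by row with parameter markers
insert_stmt = ibm_db.prepare(ibm_db_conn, "INSERT INTO MY_TABLE VALUES (?, ?, ?)")
for row in api_data:
    ibm_db.execute(insert_stmt, tuple(row))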

Related

DynamoDB FilterExpression with multiple condition python and boto3

Please, I need help writing filter expressions for scanning data in DynamoDB tables using python and boto3.
See my code below.
For some reason unknown to me, the search filter below which I am using is not giving me the right results.
Please advise.
dynamo_db = boto3.resource('dynamodb')
table = dynamo_db.Table(TABLE_NAME)
my_kwargs = {
    'FilterExpression': Key('column_1').eq(val_type_1) and Key("column_2").eq("val_type_string")
}
response = table.scan(**my_kwargs)
items = response['Items']
table_item = items[0]
When you use Scan you do not filter on a key. Your filter here is on an attribute, so you will need to change Key to Attr. Also note that Python's and does not combine two boto3 conditions; use the & operator instead.
Furthermore, you will need to implement pagination if you are scanning more than 1 MB of data:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Scan.html
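Putting both points together, here is a sketch of the corrected scan (reusing the question's placeholder names TABLE_NAME, column_1, val_type_1, etc., which are assumptions here) that follows LastEvaluatedKey across pages:
import boto3
from boto3.dynamodb.conditions import Attr

dynamo_db = boto3.resource('dynamodb')
table = dynamo_db.Table(TABLE_NAME)

scan_kwargs = {
    # Attr, not Key, and & rather than Python's `and`
    'FilterExpression': Attr('column_1').eq(val_type_1) & Attr('column_2').eq('val_type_string')
}
items = []
while True:
    response = table.scan(**scan_kwargs)
    items.extend(response['Items'])
    if 'LastEvaluatedKey' not in response:
        break  # no more pages
    scan_kwargs['ExclusiveStartKey'] = response['LastEvaluatedKey']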

Real Time Firebase Query

I have a Firebase DB with the below structure:
and the rules are:
I'm using a python script to save and read DB data. I would like to find the "d1" value.
For example, when using the below code it returns null:
ref = db.reference('Test')
query = ref.order_by_child('f1').equal_to('alal-55')
snapshot = query.get()
for key, val in snapshot.items():
    print(val)
Any solution?
Regards,
The query ref.order_by_child('f1').equal_to('alal-55') will not work because the structure of your database is too deep. You need to flatten the structure to be able to perform such queries, for example:
Test
  random_id
    f1 : value
Using the above, you can use order_by_child.
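With that flattened layout (and assuming the firebase_admin SDK, plus hypothetical credential and database-URL values), the query from the question would then look like this sketch:
import firebase_admin
from firebase_admin import credentials, db

# hypothetical service-account file and database URL
cred = credentials.Certificate('serviceAccount.json')
firebase_admin.initialize_app(cred, {'databaseURL': 'https://your-project.firebaseio.com'})

ref = db.reference('Test')
snapshot = ref.order_by_child('f1').equal_to('alal-55').get()
for key, val in snapshot.items():
    print(key, val.get('d1'))  # each match is one random_id child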

Python - parsing through a variable for desired data

this should be pretty simple.
I'm writing a program to pull data from a database, store it in a variable, and pass it on to another program. I have it connecting to the db and running the query to pull the data, which is returned with each column on a new line. I would like to parse through this output to store only the columns I need into separate variables to be imported by another python program. Please note that the print(outbound) part is just there for testing purposes.
Here's the function:
def pullData():
    cnxn = pyodbc.connect('UID=' + dbUser + ';PWD=' + dbPassword + ';DSN=' + dbHost)
    cursor = cnxn.cursor()
    # run query to pull the newest sms message
    outbound = cursor.execute("SELECT * FROM WKM_SMS_outbound ORDER BY id DESC")
    table = cursor.fetchone()
    for outbound in table:
        print(outbound)
    # close connection
    cnxn.close()
and here's the sample output from the query that I would like to parse through, as it's currently being stored in the variable outbound. (NOTE: this is not 1 column. This is one ROW. Each new line is a new column in the db... this is just how it's being returned and formatted when I run the program.)
I think this is the best way you can achieve this:
(Considering that your table variable comes back as a list)
# Let's say until here you've done your queries
collection = {}
for index, outbound in enumerate(table, start=1):
    key_name = "key{0}".format(index)
    collection[key_name] = outbound
print(collection)
OUTPUT Expected:
{
    "key1" : 6932921,
    "key2" : 303794,
    ...
}
Then, to access it from another python file, add return collection at the end of your pullData function and import the function:
from your_file_name import pullData  # considering they are in the same directory
collection = pullData()
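As an alternative sketch: pyodbc rows also expose columns by name, so you can skip the generic key1/key2 dictionary and return just the fields you need (the column names id and message below are hypothetical, as are the dbUser/dbPassword/dbHost variables carried over from the question):
import pyodbc

def pullData():
    cnxn = pyodbc.connect('UID=' + dbUser + ';PWD=' + dbPassword + ';DSN=' + dbHost)
    cursor = cnxn.cursor()
    row = cursor.execute("SELECT * FROM WKM_SMS_outbound ORDER BY id DESC").fetchone()
    cnxn.close()
    # pyodbc Row objects support attribute access by column name
    return row.id, row.message  # hypothetical column names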

Python, Oracle DB, XML data in a column, fetching cx_Oracle.Object

I am using python to fetch data from an Oracle DB. All the rows have a column which has XML data. When I print the data fetched from the Oracle DB using python, the column with XML data is printed as <cx_Oracle.OBJECT object at 0x7fffe373b960> etc. I even converted the data to a pandas data frame and still the data for this column is printed as <cx_Oracle.OBJECT object at 0x7fffe373b960>. I want to access the key-value data stored in this column (XML files).
Please read inline comments.
cursor = connection.cursor()  # you know what it is for
# here getClobVal() returns the whole XML as a CLOB. It won't work without the alias, I don't know why.
query = """select a.columnName.getClobVal() from tablename a"""
cursor.execute(query)  # you know what it is for

# for a single record:
result = cursor.fetchone()[0].read()

# or, for all records:
for res in cursor.fetchall():
    print(res[0].read())
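Once you have the CLOB text, here is a short sketch of pulling out the key-value data with the standard library (the element names printed depend entirely on what your XML actually contains):
import xml.etree.ElementTree as ET

xml_text = cursor.fetchone()[0].read()  # the CLOB from the query above
root = ET.fromstring(xml_text)
for elem in root.iter():
    print(elem.tag, elem.text)  # tag/text pairs from the document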

UnicodeDecodeError while executing query or in jsonpickle

I post an arbitrary query to the server side, where it is executed and the result set is sent back to the client. A typical query looks like this:
select Наименование from sys_Атрибут where Наименование = 'Район'
As you can see, it contains non-Latin literals. This query is not executed. However, if I write it like this
select Наименование AS attr from sys_Атрибут where Наименование = 'Район'
Then, it's ok. The server side code looks like this:
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
...
import pyodbc   # tried both of them
import pypyodbc #

def resultset(request):
    query = request.POST['query']
    query = u'{}'.format(query)
    cnx = pyodbc.connect("DRIVER=FreeTDS;SERVER=192.168.0.1;PORT=1433;"
                         "DATABASE=mydatabase;UID=sa;PWD=password;"
                         "TDS_Version=7.0;ClientCharset=UTF8;")
    cursor = cnx.cursor()
    cursor.execute(query.encode('utf-8'))
    columns = [desc[0] for desc in cursor.description]  # sometimes the error happens at this point
    data = []
    output = []
    for row in cursor:
        data.append(dict(zip(columns, row)))
    output = '{items:'
    output += jsonpickle.encode(data)  # sometimes at that point
    output += '}'
    return HttpResponse(output)
The whole problem is in the names of the table fields. I guess that to solve this problem I should handle this part of the code, data.append(dict(zip(columns, row))), in a different manner.
To state the obvious, you shouldn't be sending raw queries down to the server. Second, using unicode_literals together with u"" strings is strange. Third, building a unicode string and then dumping it out as utf-8 is also strange. I'd suggest reading up on encodings to start: http://kunststube.net/encoding/
To solve the actual issue that's likely being presented, the fault probably lies with the pyodbc library. What database are you connecting to, and have you considered using a different driver? If the database supports the query you're trying to execute (select unicode from table where field = 'value'), then it's likely the driver mucking it up.
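If the driver is indeed the culprit, one concrete thing worth trying (a sketch, assuming pyodbc 4.x, whose connections expose setdecoding/setencoding) is to declare the FreeTDS text encoding explicitly and pass the query as a unicode string instead of encoding it yourself:
import pyodbc

cnx = pyodbc.connect("DRIVER=FreeTDS;SERVER=192.168.0.1;PORT=1433;"
                     "DATABASE=mydatabase;UID=sa;PWD=password;"
                     "TDS_Version=7.0;ClientCharset=UTF8;")
# tell pyodbc how the driver encodes and decodes text
cnx.setdecoding(pyodbc.SQL_CHAR, encoding='utf-8')
cnx.setdecoding(pyodbc.SQL_WCHAR, encoding='utf-8')
cnx.setencoding(encoding='utf-8')

cursor = cnx.cursor()
cursor.execute(query)  # no .encode('utf-8') here
columns = [desc[0] for desc in cursor.description]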
