I have hosted the MongoDB database for my project on an Amazon server.
I am new to MongoDB queries. When I connect and look in the 'robomongo' tool, I can see that there are two databases, A and B. I want to access the collection named 'wl_c' under B in a Django view function and convert it to JSON data.
I do not know how to do this, even though I tried:
import json
from django.http import HttpResponse
from pymongo import Connection

server = '000.00.000.00'
port = 00000
conn = Connection(server, port)

def mongo(request):
    mdb = conn.events.polls_post.find({})
    data = json.dumps(mdb)
    return HttpResponse(data, mimetype="application/json")
I got:
TypeError: mdb is not JSON serializable
find({}) returns a cursor. You need to get the items: either cast the cursor to a list or iterate over the result.
Something like:
mdb = conn.events.polls_post.find({})
mdb_list = list(mdb)
json.dumps(mdb_list)
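Note that the documents may still contain values the standard json module cannot serialize (each document's _id is an ObjectId, for example). pymongo ships a helper for exactly this case; a minimal sketch, using the database and collection names from the question:

from bson import json_util

docs = conn['B']['wl_c'].find({})
data = json_util.dumps(list(docs))  # handles ObjectId, datetime, etc.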
We have created an API which takes data from Power BI and provides output in JSON format.
We have made some modifications to the original pyadomd code, and it runs without errors. However, it does not display the Power BI data in JSON format as it should.
Original code: https://pypi.org/project/pyadomd/
from sys import path
path.append('\\Program Files\\Microsoft.NET\\ADOMD.NET\\160')
from pyadomd import Pyadomd
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/Alpino')
def get_data():
    conn_str = 'Provider=MSOLAP;User ID=Alexw@dettol.com;Data Source=powerbi://api.powerbi.com/v1.0/myorg/Power BI Model [Test];initial catalog=PBI_Model_20230121;Password=Alexw#2023;Persist Security Info=True;Impersonation Level=Impersonate;'
    query = """EVALUATE Project"""
    with Pyadomd(conn_str) as conn:
        with conn.cursor().execute(query) as cur:
            data = cur.fetchall()
            print(data)
            return jsonify(data)

if __name__ == '__main__':
    app.run()
For a better understanding of the pyadomd library, see the link above.
Output:
No Power BI data is fetched and the request returns a 404 error.
I think app.route is unable to resolve the URL path.
When we used the default code it generated an authentication error; after the modification it is not showing output in JSON format. When we put alpino in the URL path it returns a 404 error.
I have made changes to the code and the issue has been resolved. Please find the code below.
Python code:
from sys import path
path.append('\\Program Files\\Microsoft.NET\\ADOMD.NET\\160')
from pyadomd import Pyadomd
from flask import Flask, jsonify

# Create a Flask app
app = Flask(__name__)

# Define an API endpoint
@app.route('/alpino')
def alpino():
    # Connect to Power BI and execute the query
    conn_str = 'Provider=MSOLAP;User ID=Alexw@dettol.com;Data Source=powerbi://api.powerbi.com/v1.0/myorg/Power BI Model [Test];initial catalog=PBI_Model_20230121;Password=Alexw#2023;Persist Security Info=True;Impersonation Level=Impersonate;'
    query = 'EVALUATE ROW("ProjectRowCount", COUNTROWS(Project))'
    with Pyadomd(conn_str) as conn:
        with conn.cursor().execute(query) as cur:
            data = cur.fetchall()
            column_names = [column[0] for column in cur.description]
    # Convert the query result to a list of dictionaries
    result = [dict(zip(column_names, row)) for row in data]
    # Convert the list of dictionaries to a JSON response
    json_result = jsonify(result)
    return json_result

if __name__ == '__main__':
    app.run()
Output:
[JSON output screenshot]: https://i.stack.imgur.com/YYbzy.png
Query Explanation:
This code defines a Flask API endpoint that connects to a Power BI data source and executes a query. Here's a step-by-step breakdown of the code:
The first few lines import the necessary libraries: sys.path and pyadomd for working with the Power BI data source, and flask for building the API endpoint.
from sys import path
path.append('\\Program Files\\Microsoft.NET\\ADOMD.NET\\160')
from pyadomd import Pyadomd
from flask import Flask, jsonify
The next line creates a Flask app instance with Flask(__name__).
app = Flask(__name__)
The @app.route('/alpino') decorator is used to define the API endpoint. In this case, the endpoint URL is http://<host>/alpino.
@app.route('/alpino')
The def alpino(): function defines the behavior of the API endpoint. It first sets up a connection to the Power BI data source with the given connection string and then executes a query using the Pyadomd library.
def alpino():
The cur.fetchall() method retrieves all the data returned by the query.
with Pyadomd(conn_str) as conn:
    with conn.cursor().execute(query) as cur:
        data = cur.fetchall()
The column_names variable is set to the list of column names returned by cur.description.
column_names = [column[0] for column in cur.description]
The result variable is set to a list of dictionaries, where each dictionary represents a row in the query result. The zip() and dict() functions are used to map the column names to the row values.
result = [dict(zip(column_names, row)) for row in data]
The jsonify() function is used to convert the result variable to a JSON response.
json_result = jsonify(result)
Finally, the function returns the JSON response using return json_result.
return json_result
In summary, this code sets up a Flask API endpoint that connects to a Power BI data source and returns the result of a query in JSON format. It uses the pyadomd library to connect to the data source, and the flask library to define the API endpoint and return the JSON response.
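As a quick check, one can call the endpoint once the development server is running. A sketch only; it assumes the app above is running via app.run() on Flask's default local address and port:

import requests

resp = requests.get('http://127.0.0.1:5000/alpino')
print(resp.status_code)  # 200 once the route is registered
print(resp.json())       # the list of row dictionaries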
We have data in a Snowflake cloud database that we would like to move into an Oracle database. As we would like to work toward refreshing the Oracle database regularly, I am trying to use SQLAlchemy to automate this.
I would like to do this using Core because my team is all experienced with SQL, but I am the only one with Python experience. I think it would be easier to tweak the data pulls if we just pass SQL strings. Plus, the Snowflake db has some columns containing JSON that seem easier to parse using direct SQL, since I do not see JSON support in the SnowflakeDialect.
I have established connections to both databases and am able to do select queries from both. I have also manually created the tables in our Oracle db so that the keys and datatypes match what I am pulling from Snowflake. When I try to insert, though, my Jupyter notebook just continuously says "Executing Cell" and hangs. Any thoughts on how to proceed or how to get the notebook to tell me where the hangup is?
from sqlalchemy import create_engine, pool, MetaData, text
from snowflake.sqlalchemy import URL
import pandas as pd

eng_sf = create_engine(URL(  # engine for Snowflake
    account = 'account',
    user = 'user',
    password = 'password',
    database = 'database',
    schema = 'schema',
    warehouse = 'warehouse',
    role = 'role',
    timezone = 'timezone',
))

eng_o = create_engine("oracle+cx_oracle://{}[{}]:{}@{}".format('user', 'proxy', 'password', 'database'), poolclass=pool.NullPool)  # engine for Oracle

meta_o = MetaData()
meta_o.reflect(bind=eng_o)
person_o = meta_o.tables['bb_lms_person']  # other Oracle tables follow this example

meta_sf = MetaData()
meta_sf.reflect(bind=eng_sf, only=['person'])  # other Snowflake tables as well, but for simplicity, let's look at one
person_sf = meta_sf.tables['person']

person_query = """
SELECT ID
    ,EMAIL
    ,STAGE:student_id::STRING as STUDENT_ID
    ,ROW_INSERTED_TIME
    ,ROW_UPDATED_TIME
    ,ROW_DELETED_TIME
FROM cdm_lms.PERSON
"""

with eng_sf.begin() as connection:
    result = connection.execute(text(person_query)).fetchall()  # this snippet runs and returns result as expected

with eng_o.begin() as connection:
    connection.execute(person_o.insert(), result)  # this is a coin flip: sometimes it runs, sometimes it just hangs forever

eng_sf.dispose()
eng_o.dispose()
I've checked the typical offenders. The keys for both person_o and the result are all lowercase and match. Any guidance would be appreciated.
Use the metadata for the table. The fTable_Stage update and insert are fluent functions, and values are assigned as keyword arguments. This is very safe because only metadata field names can be used. I am updating three fields: LateProbabilityDNN, Sentiment_Polarity, Sentiment_Subjectivity.
from sqlalchemy import create_engine, MetaData, Table
from sqlalchemy.orm import sessionmaker

engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)
connection = engine.connect()
metadata = MetaData()
Session = sessionmaker(bind=engine)
session = Session()

fTable_Stage = Table('fTable_Stage', metadata, autoload=True, autoload_with=engine)
stmt = fTable_Stage.update().where(fTable_Stage.c.KeyID == keyID).values(
    LateProbabilityDNN=round(float(late_proba), 2),
    Sentiment_Polarity=round(my_valance.sentiment.polarity, 2),
    Sentiment_Subjectivity=round(my_valance.sentiment.subjectivity, 2),
)
connection.execute(stmt)
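Applying the same metadata pattern to the original Snowflake-to-Oracle question, one thing worth trying is converting the fetched Row objects to plain dictionaries before the bulk insert, so the executemany parameter set is unambiguous. A sketch, assuming SQLAlchemy 1.4+ (where rows expose a _mapping attribute) and the person_o table reflected above:

# Convert each SQLAlchemy Row to a plain dict keyed by column name
# (Row._mapping is available in SQLAlchemy 1.4+).
rows = [dict(row._mapping) for row in result]

with eng_o.begin() as connection:
    connection.execute(person_o.insert(), rows)  # executemany over plain dicts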
I'm running a flask-rest-jsonapi application on top of Flask, sqlalchemy, and cx_Oracle. One requirement of this project is that the connection property client_identifier, made available via cx_Oracle (relevant documentation), be modifiable based on a value sent in a JWT with each client request. We need to be able to write to this property because our internal auditing tables make use of it to track changes made by individual users.
In PHP, setting this value is straightforward using oci8, and has worked great for us in the past.
However, I have been unable to figure out how to set the same property using this new application structure. In cx_Oracle, the client_identifier property is a 'write-only' property, so it's difficult to verify that the value is set correctly without going to the backend and examining the db session properties. You access this property via the sqlalchemy raw_connection object.
Beyond being difficult to read, setting the value has no effect. We get the desired client identifier value from the JWT passed in with each request and attempt to set it on the raw connection object. While the action of setting the value throws no error, the value does not show up on the backend for the relevant session, i.e. the client_identifier property is null when viewing sessions on the db side.
from flask import Flask, jsonify, request
from flask_sqlalchemy import SQLAlchemy
app = Flask(__name__)
db = SQLAlchemy(app)
# the client_identifier property is available at db.engine.raw_connection()
# attempt to set the client_identifier property
raw_conn = db.engine.raw_connection()
raw_conn.client_identifier = 'USER_210'
# execute some sql using raw_conn.cursor()...
# the client identifier value on the db side for this request is null
Is the approach shown above the correct way to set the client_identifier? If so, why isn't USER_210 listed in the client_identifier column when querying the backend session table using the v$session view?
In pure cx_Oracle, this works for me:
import cx_Oracle
db = cx_Oracle.connect("system", "oracle", "localhost/orclpdb")
db.client_identifier = 'this-is-me'
cursor = db.cursor()
cursor.execute("select username, client_identifier from v$session where username = 'SYSTEM'")
v = cursor.fetchone()
print(v)
The result is:
$ python3 q1.py
('SYSTEM', 'this-is-me')
I don't have the setup to test your exact scenario.
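One possible explanation, offered as a guess rather than a verified diagnosis: db.engine.raw_connection() checks a connection out of SQLAlchemy's pool, so the connection you set the identifier on may not be the one a later request uses. A sketch of setting it on every connection as it is checked out, using SQLAlchemy's pool events; the identifier value is a hypothetical stand-in for the one taken from the JWT:

from sqlalchemy import event

@event.listens_for(db.engine, "checkout")
def set_client_identifier(dbapi_connection, connection_record, connection_proxy):
    # dbapi_connection is the underlying cx_Oracle connection
    dbapi_connection.client_identifier = 'USER_210'  # hypothetical; derive from the JWT in practice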
Is it possible to make SQLAlchemy do cross-server joins?
If I try to run something like
from sqlalchemy import create_engine, sql

engine = create_engine('mssql+pyodbc://SERVER/Database')
query = sql.text('SELECT TOP 10 * FROM [dbo].[Table]')
with engine.begin() as connection:
    data = connection.execute(query).fetchall()
It works as I'd expect. If I change the query to select from [OtherServer].[OtherDatabase].[dbo].[Table] I get the error message "Login failed for user 'NT AUTHORITY\\ANONYMOUS LOGON'".
Looks like there's an issue with how you authenticate to SQL Server.
I believe you can connect using the current Windows user; the URI syntax is then mssql+pyodbc://SERVER/Database?trusted_connection=yes (I have never tested this, but give it a try).
Another option is to create a SQL Server login (i.e. a username/password that is defined within SQL Server, NOT a Windows user) and use the SQL Server login when you connect.
The database URI then becomes: mssql+pyodbc://username:password@SERVER/Database.
mssql+pyodbc://SERVER/Database?trusted_connection=yes threw an error when I tried it. It did point me in the right direction though.
from sqlalchemy import create_engine, sql
from urllib.parse import quote_plus  # urllib.quote_plus on Python 2

string = "DRIVER={SQL SERVER};SERVER=server;DATABASE=db;TRUSTED_CONNECTION=YES"
params = quote_plus(string)
engine = create_engine('mssql+pyodbc:///?odbc_connect={0}'.format(params))
query = sql.text('SELECT TOP 10 * FROM [CrossServer].[database].[dbo].[Table]')
with engine.begin() as connection:
    data = connection.execute(query).fetchall()
It's quite complicated if you intend to address different servers through one connection.
But if you need to query a different server under different credentials, you should first add a linked server with sp_addlinkedserver, then add credentials for the linked server with sp_addlinkedsrvlogin. Have you tried this?
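For illustration, the one-time linked-server setup could be driven from SQLAlchemy as well. A minimal sketch, assuming the remote machine is another SQL Server instance and that you have permission to run these system procedures; all server, user, and password names are placeholders:

from sqlalchemy import create_engine, text

engine = create_engine('mssql+pyodbc://SERVER/Database?trusted_connection=yes')

with engine.begin() as connection:
    # Register the remote SQL Server instance as a linked server.
    connection.execute(text(
        "EXEC sp_addlinkedserver @server = N'OtherServer', @srvproduct = N'SQL Server'"
    ))
    # Map local logins to a SQL Server login defined on the linked server.
    connection.execute(text(
        "EXEC sp_addlinkedsrvlogin @rmtsrvname = N'OtherServer', "
        "@useself = N'False', @rmtuser = N'remote_user', @rmtpassword = N'remote_password'"
    ))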
I'm creating an iOS client for App.net and I'm attempting to set up a push notification server. Currently my app can add a user's App.net account id (a string of numbers) and an APNS device token to a MySQL database on my server. It can also remove this data. I've adapted code from these two tutorials:
How To Write A Simple PHP/MySQL Web Service for an iOS App - raywenderlich.com
Apple Push Notification Services in iOS 6 Tutorial: Part 1/2 - raywenderlich.com
In addition, I've adapted this awesome python script to listen in to App.net's App Stream API.
My Python is horrendous, as is my MySQL knowledge. What I'm trying to do is access the APNS device token for the accounts I need to notify. My database table has two fields/columns for each entry: one for user_id and one for device_token. I'm not sure of the terminology; please let me know if I can clarify this.
I've been trying to use peewee to read from the database but I'm in way over my head. This is a test script with placeholder user_id:
import logging
from pprint import pprint
import peewee
from peewee import *

db = peewee.MySQLDatabase("...", host="localhost", user="...", passwd="...")

class MySQLModel(peewee.Model):
    class Meta:
        database = db

class Active_Users(MySQLModel):
    user_id = peewee.CharField(primary_key=True)
    device_token = peewee.CharField()

db.connect()

# This is the placeholder user_id
userID = '1234'

token = Active_Users.select().where(Active_Users.user_id == userID)
pprint(token)
This then prints out:
<class '__main__.User'> SELECT t1.`id`, t1.`user_id`, t1.`device_token` FROM `user` AS t1 WHERE (t1.`user_id` = %s) [u'1234']
If the code didn't make it clear, I'm trying to query the database for the row with the user_id of '1234' and I want to store the device_token of the same row (again, probably the wrong terminology) into a variable that I can use when I send the push notification later on in the script.
How do I correctly return the device_token? Also, would it be easier to forgo peewee and simply query the database using python-mysqldb? If that is the case, how would I go about doing that?
The call Active_Users.select().where(Active_Users.user_id == userID) returns a query of matching rows, but you are assigning it to a variable called token as though you're expecting just the device_token.
Your assignment should be something like this:
matching_user = Active_Users.select().where(Active_Users.user_id == userID).first()  # first matching user, or None if there is no match
if matching_user is not None:
    token = matching_user.device_token
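As for the second part of the question: yes, you could skip peewee and query the database with MySQLdb directly. A minimal sketch, assuming the table is named active_users and the connection placeholders mirror the peewee setup above:

import MySQLdb

conn = MySQLdb.connect(host="localhost", user="...", passwd="...", db="...")
cursor = conn.cursor()
# Parameterized query; the driver substitutes %s safely.
cursor.execute("SELECT device_token FROM active_users WHERE user_id = %s", (userID,))
row = cursor.fetchone()
if row is not None:
    token = row[0]
cursor.close()
conn.close()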