Passing parameters in Neo4j using Python

I want to pass parameters in a CREATE statement using Python, e.g.:
n = "abc"
a = 1234
cqlCreate = "CREATE (cornell:university {name: $n, yob: $a})"
but it doesn't work. Any suggestions, please?

It actually depends on the Python driver that you are using to connect to Neo4j (see https://neo4j.com/developer/python/ for the list of available drivers). If you are using the official neo4j-driver, the code you wrote is correct. In order to execute the Cypher query, you could do something like this:
from neo4j import GraphDatabase

uri = ""       # insert neo4j uri here
user = ""      # insert neo4j username here
password = ""  # insert neo4j password here

n = "abc"
a = 1234
query = "CREATE (cornell:university {name: $n, yob: $a})"

driver = GraphDatabase.driver(uri, auth=(user, password))
session = driver.session()
result = session.run(query, n=n, a=a)  # parameters are passed as keyword arguments
session.close()
driver.close()
Although âńōŋŷXmoůŜ's answer will probably work, it is not the recommended way to do it.
See also:
https://neo4j.com/docs/api/python-driver/current/api.html
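As a side note, the driver and session can also be used as context managers so they are closed automatically even if the query fails. A minimal sketch, assuming the same uri, user, and password as above:

from neo4j import GraphDatabase

# Sketch only: the context managers close the driver and session
# automatically; Cypher parameters are still passed as keyword arguments.
with GraphDatabase.driver(uri, auth=(user, password)) as driver:
    with driver.session() as session:
        result = session.run(
            "CREATE (cornell:university {name: $n, yob: $a}) "
            "RETURN cornell.name AS name",
            n="abc", a=1234)
        for record in result:
            print(record["name"])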

You can use f-strings in Python. See the example below. Note that:
1. You need to use {{ as the escape character for {
2. You need to use \ as the escape character for "
n = "abc"
a = 1234
cqlCreate = f"CREATE (cornell:university {{name: \"{n}\", yob: {a}}})"
print(cqlCreate)
Result:
CREATE (cornell:university {name: "abc", yob: 1234})
reference: https://www.python.org/dev/peps/pep-0498/

Related

Problems querying BigQuery with Python (Python string format)

I am trying to run a query in BigQuery in order to modify all the values of a row (in Python). When I use a simple query string, I have no problems; however, when I introduce string formatting the query does not work. Below I present the same query, but with fewer of the columns that I am modifying.
I have already made the connection to BigQuery by defining the client, etc., and it works properly.
I tried:
"UPDATE `riscos-dev.survey_test.data-test-bdrn` SET informaci_meteorol_gica = {inf}, risc = {ri} WHERE objectid = {obj_id}".format(inf=df.informaci_meteorol_gica[index], ri=df.risc[index], obj_id=df.objectid[index])
The input values are, for example, df.informaci_meteorol_gica[index] = 'Neu' (a string, as is df.risc[index]) and df.objectid[index] = 3.
I am getting the following error message:
BadRequest: 400 Braced constructors are not supported at [1:77]
Instead of using the string format method, I propose another approach: f-string formatting in Python.
def build_query():
    inf = "'test_inf'"
    ri = "'test_ri'"
    obj_id = "'test_obj_id'"
    return f"UPDATE `riscos-dev.survey_test.data-test-bdrn` SET informaci_meteorol_gica = {inf}, risc = {ri} WHERE objectid = {obj_id}"

if __name__ == '__main__':
    query = build_query()
    print(query)
The result is:
UPDATE `riscos-dev.survey_test.data-test-bdrn` SET informaci_meteorol_gica = 'test_inf', risc = 'test_ri' WHERE objectid = 'test_obj_id'
I mocked the query parameters in my example with:
inf = "'test_inf'"
ri = "'test_ri'"
obj_id = "'test_obj_id'"
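Alternatively, to sidestep the quoting problems (and the injection risk of interpolating values into SQL) altogether, BigQuery supports named query parameters. A minimal sketch with the google-cloud-bigquery client, using illustrative values:

from google.cloud import bigquery

client = bigquery.Client()

# Named parameters (@inf, @ri, @obj_id) replace the string interpolation;
# the values below are illustrative, matching the question's example.
sql = (
    "UPDATE `riscos-dev.survey_test.data-test-bdrn` "
    "SET informaci_meteorol_gica = @inf, risc = @ri "
    "WHERE objectid = @obj_id"
)
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("inf", "STRING", "Neu"),
        bigquery.ScalarQueryParameter("ri", "STRING", "test_ri"),
        bigquery.ScalarQueryParameter("obj_id", "INT64", 3),
    ]
)
client.query(sql, job_config=job_config).result()  # waits for completion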

I have an error with Python Flask because of an API result (probably because of my list) and my database

I use Flask, an API, and SQLAlchemy with SQLite.
I am new to Python and Flask, and I have a problem with a list.
My application works; now I am trying a new function.
I need to know whether my JSON information is already in my database.
The function find_current_project_team() gets information from the API:
def find_current_project_team():
    headers = {"Authorization": "bearer " + session['token_info']['access_token']}
    user = requests.get("https://my.api.com/users/xxxx/", headers=headers)
    user = user.json()
    ids = [x['id'] for x in user]
    return ids
I use ids = [x['id'] for x in user], which is the same as:
ids = []
for x in user:
    ids.append(x['id'])
to get the ids. The ids are the id fields in the API response, and I need them.
I get this result:
[2766233, 2766237, 2766256]
I want to check the values one by one in my database.
If a value doesn't exist, I want to add it.
If one or all of the values exist, I want to return "impossible sorry, the ids already exist".
For that I wrote a new function:
def test():
    test = find_current_project_team()
    for find_team in test:
        find_team_db = User.query.filter_by(
            login=session['login'], project_session=test
        ).first()
I have absolutely no idea how to check the values one by one.
If someone can help me, thank you :)
Currently I have this error:
sqlalchemy.exc.InterfaceError: (InterfaceError) Error binding
parameter 1 - probably unsupported type. 'SELECT user.id AS user_id,
user.login AS user_login, user.project_session AS user_project_session
\nFROM user \nWHERE user.login = ? AND user.project_session = ?\n
LIMIT ? OFFSET ?' ('my_tab_login', [2766233, 2766237, 2766256], 1, 0)
It looks to me like you are passing the list directly into the database query:
def test():
    test = find_current_project_team()
    for find_team in test:
        find_team_db = User.query.filter_by(login=session['login'], project_session=test).first()
Instead, you should pass in the ID only:
def test():
    test = find_current_project_team()
    for find_team in test:
        find_team_db = User.query.filter_by(login=session['login'], project_session=find_team).first()
Aside from that, I think you can do better with the naming conventions:
def test():
    project_teams = find_current_project_team()
    for project_team in project_teams:
        project_team_result = User.query.filter_by(login=session['login'], project_session=project_team).first()
It all works, thanks. My code:
project_teams = find_current_project_team()
for project_team in project_teams:
    project_team_result = User.query.filter_by(project_session=project_team).first()
    print(project_team_result)
    if project_team_result is not None:
        print("not none")
    else:
        project_team_result = User(login=session['login'], project_session=project_team)
        db.session.add(project_team_result)
        db.session.commit()
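As a side note, if checking the ids one by one ever becomes slow, the existence check can be done in a single query with SQLAlchemy's in_ operator. A sketch, assuming the same User model and db session as above:

# Fetch all already-known ids in one query instead of one query per id.
existing = {
    u.project_session
    for u in User.query.filter(User.project_session.in_(project_teams)).all()
}
for project_team in project_teams:
    if project_team not in existing:
        db.session.add(User(login=session['login'], project_session=project_team))
db.session.commit()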

Fetching results from cypher bolt statement

I am trying to access Neo4j using the Neo4j Python driver. I am running the following code to get a property of a thing A. I open the driver and session directly from GraphDatabase of neo4j and use session.run() to execute graph queries. These queries return a BoltStatementResult object. My question is: how can this object be converted to the actual result that I need (the property of thing A)?
from neo4j import GraphDatabase

uri = "bolt://abc:7687"
driver = GraphDatabase.driver(uri, auth=("neo4j", "password"))

def matchQuestion(tx, intent, thing):
    # note the trailing space before RETURN so the two string
    # literals do not run together into invalid Cypher
    result = tx.run("MATCH (e:thing) WHERE e.name = {thing} "
                    "RETURN e.description", thing=thing)
    print(result)

with driver.session() as session:
    session.read_transaction(matchQuestion, "define", "A")
You can iterate over the result records, for example:
result = tx.run("MATCH (e:thing) WHERE e.name = {thing} "
                "RETURN e.description AS description", thing=thing)
for line in result:
    print(line["description"])
or
print(result.single())
You could also specify the item position, like:
print(result.single()[0])
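Note that printing inside the transaction function throws the data away once the transaction ends; to get the descriptions back to the caller, return them from the function. A sketch based on the code above:

def matchQuestion(tx, intent, thing):
    result = tx.run("MATCH (e:thing) WHERE e.name = {thing} "
                    "RETURN e.description AS description", thing=thing)
    # Materialize the records before the transaction closes.
    return [record["description"] for record in result]

with driver.session() as session:
    descriptions = session.read_transaction(matchQuestion, "define", "A")
    print(descriptions)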

Write results to a permanent table in BigQuery

I am using named parameters in BigQuery SQL and want to write the results to a permanent table. I have two functions: one for using named query parameters and one for writing query results to a table. How do I combine the two so that the results of a query with named parameters are written to a table?
This is the function using parameterized queries:
def sync_query_named_params(column_name, min_word_count, value):
    query = """with lsq_results as
    (select "%s" = @min_word_count)
    replace (%s AS %s)
    from lsq.lsq_results
    """ % (min_word_count, value, column_name)
    client = bigquery.Client()
    query_results = client.run_sync_query(
        query,
        query_parameters=(
            bigquery.ScalarQueryParameter('column_name', 'STRING', column_name),
            bigquery.ScalarQueryParameter('min_word_count', 'STRING', min_word_count),
            bigquery.ScalarQueryParameter('value', 'INT64', value)))
    query_results.use_legacy_sql = False
    query_results.run()
This is the function to write to a permanent table:
class BigQueryClient(object):
    def __init__(self, bq_service, project_id, swallow_results=True):
        self.bigquery = bq_service
        self.project_id = project_id
        self.swallow_results = swallow_results
        self.cache = {}

    def write_to_table(
            self,
            query,
            dataset=None,
            table=None,
            external_udf_uris=None,
            allow_large_results=None,
            use_query_cache=None,
            priority=None,
            create_disposition=None,
            write_disposition=None,
            use_legacy_sql=None,
            maximum_billing_tier=None,
            flatten=None):
        configuration = {
            "query": query,
        }
        if dataset and table:
            configuration['destinationTable'] = {
                "projectId": self.project_id,
                "tableId": table,
                "datasetId": dataset
            }
        if allow_large_results is not None:
            configuration['allowLargeResults'] = allow_large_results
        if flatten is not None:
            configuration['flattenResults'] = flatten
        if maximum_billing_tier is not None:
            configuration['maximumBillingTier'] = maximum_billing_tier
        if use_query_cache is not None:
            configuration['useQueryCache'] = use_query_cache
        if use_legacy_sql is not None:
            configuration['useLegacySql'] = use_legacy_sql
        if priority:
            configuration['priority'] = priority
        if create_disposition:
            configuration['createDisposition'] = create_disposition
        if write_disposition:
            configuration['writeDisposition'] = write_disposition
        if external_udf_uris:
            configuration['userDefinedFunctionResources'] = \
                [{'resourceUri': u} for u in external_udf_uris]

        body = {
            "configuration": {
                'query': configuration
            }
        }

        logger.info("Creating write to table job %s" % body)
        job_resource = self._insert_job(body)
        self._raise_insert_exception_if_error(job_resource)
        return job_resource
How do I combine the two functions to run a parameterized query and write the results to a permanent table? Or, if there is another, simpler way, please suggest it.
You appear to be using two different client libraries.
Your first code sample uses a beta version of the BigQuery client library, but for the time being I would recommend against using it, since it needs substantial revision before it is considered generally available. (And if you do use it, I would recommend using run_async_query() to create a job using all available parameters, and then call results() to get the QueryResults object.)
Your second code sample is creating a job resource directly, which is a lower-level interface. When using this approach, you can specify the configuration.query.queryParameters field on your query configuration directly. This is the approach I'd recommend right now.
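For what it's worth, in the current google-cloud-bigquery library both concerns fit into a single QueryJobConfig: destination selects the permanent table and query_parameters carries the named parameters. A minimal sketch with illustrative table and parameter names:

from google.cloud import bigquery

client = bigquery.Client()

# Illustrative destination table and parameter; adapt to your project.
job_config = bigquery.QueryJobConfig(
    destination="my-project.my_dataset.my_table",
    write_disposition="WRITE_TRUNCATE",
    query_parameters=[
        bigquery.ScalarQueryParameter("min_word_count", "INT64", 250),
    ],
)
query_job = client.query(
    "SELECT word, word_count FROM `bigquery-public-data.samples.shakespeare` "
    "WHERE word_count >= @min_word_count",
    job_config=job_config,
)
query_job.result()  # waits for the query to finish writing the table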

Using Python elasticsearch_dsl with nested objects

I want to try to use elasticsearch_dsl with Python for the following:
from elasticsearch import Elasticsearch

es_server = 'my_server_name'
es_port = '9200'
es_index_name = 'my_index_name'
es_connection = Elasticsearch([{'host': es_server, 'port': es_port}])

es_query = '{"query":{"bool":{"must":[{"term":{"data.party.fullName":"john do"}}],"must_not":[],"should":[]}},"from":0,"size":1,"sort":[],"facets":{}}'
my_results = es_connection.search(index=es_index_name, body=es_query)
print(my_results)

es_query = '{"query": {"nested" : {"filter" : {"term" : {"party.phoneList.phoneFullNumber" : "4081234567"}},"path" : "party.phoneList"}},"from" :0,"size" : 1}'
my_results = es_connection.search(index=es_index_name, body=es_query)
print(my_results)
I am able to translate the first query to the DSL, but am not sure about the second one:
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search, Q

client = Elasticsearch('my_server:9200')
s = Search(using=client, index="my_index").query("term", fullName="john do ")
response = s.execute()
print(response)
I am not sure how to write the DSL query for the nested object party.phoneList.phoneFullNumber.
I am new to ES and hence could not figure out how to handle nested objects.
I looked at https://github.com/elastic/elasticsearch-dsl-py/issues/28 and could not quite figure it out.
Thanks!
Just use __ instead of . to get around Python's limitations, and the nested query:
s = Search(using=client, index="my_index")
s = s.query("nested",
            path="party.phoneList",
            query=Q("term", party__phoneList__phoneFullNumber="4081234567")
)
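From there, execution works the same as for the first query. A short usage sketch, assuming the mapping from the question:

response = s.execute()
for hit in response:
    # Each hit exposes document fields as attributes, plus metadata on hit.meta.
    print(hit.meta.id, hit.meta.score)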
