I have created an ObjectType using Python Graphene however in the query object to return the data, I don't know what the return should be from the resolver.
My code is below:
class RunLog(ObjectType):
    status = String()
    result = String()
    log = List(String)

    def resolve_status(self, info, run_id):
        return r.hget("run:%i" % run_id, "status").decode('utf-8')

    def resolve_result(self, info, run_id):
        return r.hget("run:%i" % run_id, "result").decode('utf-8')

    def resolve_log(self, info, run_id):
        log_data = r.lrange("run:%i:log" % run_id, 0, -1)
        return [entry.decode('utf-8') for entry in log_data]

class Query(ObjectType):
    log_by_run_id = Field(RunLog, run_id=Int(required=True))

    def resolve_log_by_run_id(root, info, run_id):
        return ???
The RunLog object should read from a redis database and return the data at the relevant run_id.
I want to be able to execute the following query to get the data associated with that run:
{
  logByRunId(runId: 1) {
    status
    result
    log
  }
}
What should the return be from 'resolve_log_by_run_id'? The Graphene documentation is not helpful.
Try returning a RunLog object, i.e.
def resolve_log_by_run_id(root, info, run_id):
    # fetch values from Redis here using run_id (status, result, log)
    run_log = RunLog(status=status, result=result, log=log)
    return run_log
Also, since the arguments passed to your method are named, you should consider renaming your method something less restrictive like resolve_log or resolve_run_log. If you need to add another filter to your resolver, you won't need to add another method.
I'd like to have two path operations with the same path @app.get("/movies/", ..):
@app.get("/movies/", response_model=schemas.Movie)
def read_movie_by_title(title: str, db: Session = Depends(get_db)):
    db_movie = crud.get_movie_by_title(db, title=title)
    return db_movie

@app.get("/movies/", response_model=List[schemas.Movie])
def read_movies(skip: int = 0, limit: int = 100, db: Session = Depends(get_db)):
    db_movies = crud.get_movies(db, skip=skip, limit=limit)
    return db_movies
As you can see, the first one gets a movie by its title via a query parameter (title), and the second one gets a list. When I check the generated docs, the entry for read_movie_by_title is missing.
I tried to solve this by changing the path for read_movie_by_title to /movies (removing the trailing /), but I don't like that solution at all.
So the question is: Is there a way to have two equal paths, but one with query parameters, or do I need to do this in a different way? Any suggestions?
You could use path parameters instead of query parameters for the first one, by changing the endpoint to "/movies/{title}":

@app.get("/movies/{title}", response_model=schemas.Movie)
def read_movie_by_title(title: str, db: Session = Depends(get_db)):
    db_movie = crud.get_movie_by_title(db, title=title)
    return db_movie

# 2nd remains the same
@app.get("/movies/", response_model=List[schemas.Movie])
def read_movies(skip: int = 0, limit: int = 100, db: Session = Depends(get_db)):
    db_movies = crud.get_movies(db, skip=skip, limit=limit)
    return db_movies
The answer to your question (is there a way to have two equal paths, but one with query parameters?) is no: the only way to differentiate a path is by its HTTP method (GET, PUT, POST, DELETE, ...).
You do however have multiple ways of achieving your desired result.
REST approach
The standard REST approach would be to define a different path for title-based retrieval, such as /movies/{title}, as in the first example shown in the FastAPI documentation on path parameters.
If you want to implement a search feature instead of direct retrieval by a unique title, you can do something like this:

@app.get("/movies", response_model=List[schemas.Movie])
def read_movies(title: Optional[str] = None, skip: int = 0, limit: int = 100, db: Session = Depends(get_db)):
    # build a list of filters
    filters = []
    if title:  # only filter when a title was supplied
        filters.append(models.Movie.title.ilike(f"%{title}%"))
    db_movies = crud.get_movies(db, filters=filters, skip=skip, limit=limit)
    return db_movies
and within your crud operation you would query it something like:
return session.query(models.Movie).filter(*filters).all()
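The filter-list idea is independent of SQLAlchemy. A minimal pure-Python sketch (the data and names here are made up for illustration) shows the pattern of building a list of predicates and applying them all:

```python
# Toy data standing in for the movies table (hypothetical)
movies = [{"title": "Alien"}, {"title": "Aliens"}, {"title": "Heat"}]

def get_movies(data, filters, skip=0, limit=100):
    # keep only rows matching every predicate, then page the result
    rows = [m for m in data if all(f(m) for f in filters)]
    return rows[skip:skip + limit]

filters = []
title = "alien"
if title:  # only filter when a title was supplied
    filters.append(lambda m: title.lower() in m["title"].lower())

print(get_movies(movies, filters))
```

An empty filter list returns everything, which is exactly the behavior the endpoint needs when no title is supplied.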
Simpler approach that does what you want with one method
@app.get("/movies", response_model=List[schemas.Movie])
def read_movies(title: Optional[str] = None, skip: int = 0, limit: int = 100, db: Session = Depends(get_db)):
    if title:  # direct retrieval when a title was supplied
        return crud.get_movie_by_title(db, title=title)
    return crud.get_movies(db, skip=skip, limit=limit)
The latter might confuse the consumers of your API, though, so the REST approach is the recommended one.
I want to extract the line 'Unique protein chains: 1' from this entry, using a GraphQL query.
I know this is the query I want to use:
{
  entry(entry_id: "5O6C") {
    rcsb_entry_info {
      polymer_entity_count_protein
    }
  }
}
and I can see the output if I use the GraphQL interface here:

{
  "data": {
    "entry": {
      "rcsb_entry_info": {
        "polymer_entity_count_protein": 1
      }
    }
  }
}

This has the information I want: "polymer_entity_count_protein": 1
I want to run this query through Python so it can be fed into other pipelines (and also so I can process multiple IDs).
I found graphene, one library that works with GraphQL queries, and this is the hello world example, which I can get to work on my machine:
import graphene

class Query(graphene.ObjectType):
    hello = graphene.String(name=graphene.String(default_value="world"))

    def resolve_hello(self, info, name):
        return name

schema = graphene.Schema(query=Query)
result = schema.execute('{ hello }')
print(result.data['hello'])  # "world"
I don't understand how to combine the two. Can someone show me how to edit my Python code with the query of interest, so that what's printed at the end is:
'5O6C 1'
I have seen some other examples/questions about graphene/GraphQL, e.g. here; but I can't work out how to make my specific example work.
Based on answer below, I ran:
import graphene

class Query(graphene.ObjectType):
    # ResponseType needs to be the type of your response
    # the following line defines the return value of your query (ResponseType)
    # and the inputType (graphene.String())
    entry = graphene.String(entry_id=graphene.String(default_value=''))

    def resolve_entry(self, info, **kwargs):
        id = kwargs.get('entry_id')
        # as you already have a working query you should enter the logic here

schema = graphene.Schema(query=Query)
# not totally sure if the query needs to look like this, it also depends heavily on your response type
query = '{ entry(entry_id="506C"){rcsb_entry_info}'
result = schema.execute(query)
print("506C" + str(result.data.entry.rcsb_entry_info.polymer_entity_count_protein))
However, I get:
Traceback (most recent call last):
  File "graphene_query_for_rcsb.py", line 18, in <module>
    print("506C" + str(result.data.entry.rcsb_entry_info.polymer_entity_count_protein))
AttributeError: 'NoneType' object has no attribute 'entry'
Did you implement the logic of the already-working query you show in your question? Is it not using Python/graphene?
I'm not sure if I understood the question correctly but here's a general idea:
import graphene

class Query(graphene.ObjectType):
    # ResponseType needs to be the type of your response
    # the following line defines the return value of your query (ResponseType)
    # and the input type (graphene.String())
    entry = graphene.Field(ResponseType, entry_id=graphene.String())

    def resolve_entry(self, info, **kwargs):
        id = kwargs.get('entry_id')
        # as you already have a working query you should enter the logic here

schema = graphene.Schema(query=Query)
# not totally sure if the query needs to look like this, it also depends heavily on your response type
query = '{ entry(entry_id: "5O6C") { rcsb_entry_info { polymer_entity_count_protein } } }'
result = schema.execute(query)
print("5O6C " + str(result.data['entry']['rcsb_entry_info']['polymer_entity_count_protein']))
Here is an example of a response type. If you have the query:

# here TeamType is my ResponseType
get_team = graphene.Field(TeamType, id=graphene.Int())

def resolve_get_team(self, info, **kwargs):
    id = kwargs.get('id')
    if id is not None:
        return Team.objects.get(pk=id)
    else:
        raise Exception()
the responseType is defined as:
class TeamType(DjangoObjectType):
    class Meta:
        model = Team
but you can also define a response type that is not based on a model:
class DeleteResponse(graphene.ObjectType):
    numberOfDeletedObject = graphene.Int(required=True)
    numberOfDeletedTeams = graphene.Int(required=False)
And your response type should look something like this:
class Polymer(graphene.ObjectType):
    polymer_entity_count_protein = graphene.Int()

class myResponse(graphene.ObjectType):
    rcsb_entry_info = graphene.Field(Polymer)
Again, this is not tested, and I don't really know what your response actually looks like.
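It is also worth noting that graphene is mainly for building and serving your own schema; to run a query against RCSB's already-deployed API you can simply POST the query over HTTP. A sketch using only the standard library (the URL is RCSB's public GraphQL endpoint; `build_query` and `fetch_protein_chain_count` are helper names made up here):

```python
import json
import urllib.request

RCSB_GRAPHQL_URL = "https://data.rcsb.org/graphql"  # RCSB's public GraphQL endpoint

def build_query(entry_id):
    """Return the GraphQL query string for one PDB entry id."""
    return ('{ entry(entry_id: "%s") '
            '{ rcsb_entry_info { polymer_entity_count_protein } } }' % entry_id)

def fetch_protein_chain_count(entry_id):
    """POST the query and dig the count out of the JSON response."""
    payload = json.dumps({"query": build_query(entry_id)}).encode()
    req = urllib.request.Request(RCSB_GRAPHQL_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read())
    return data["data"]["entry"]["rcsb_entry_info"]["polymer_entity_count_protein"]

if __name__ == "__main__":
    # loop over several ids to feed other pipelines
    for pdb_id in ["5O6C"]:
        print(pdb_id, fetch_protein_chain_count(pdb_id))
```

Because the ids are just loop variables, processing multiple entries is a matter of extending that list.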
I use Flask, an API, and SQLAlchemy with SQLite.
I am new to Python and Flask, and I have a problem with a list.
My application works; now I am trying a new function.
I need to know whether my JSON information is already in my database.
The function find_current_project_team() gets information from the API:
def find_current_project_team():
    headers = {"Authorization": "bearer " + session['token_info']['access_token']}
    user = requests.get("https://my.api.com/users/xxxx/", headers=headers)
    user = user.json()
    ids = [x['id'] for x in user]
    return ids
I use ids = [x['id'] for x in user], which is the same as:

ids = []
for x in user:
    ids.append(x['id'])

to collect the ids. These are the id values from the API, and I need them.
I have this result :
[2766233, 2766237, 2766256]
I want to check the values one by one in my database.
If a value doesn't exist, I want to add it.
If one or all of the values exist, I want to stop and return "impossible sorry, the ids already exist".
For that I write a new function:
def test():
    test = find_current_project_team()
    for find_team in test:
        find_team_db = User.query.filter_by(
            login=session['login'], project_session=test
        ).first()
I have absolutely no idea how to check the values one by one.
If someone can help me, thank you :)
Currently I get this error:
sqlalchemy.exc.InterfaceError: (InterfaceError) Error binding
parameter 1 - probably unsupported type. 'SELECT user.id AS user_id,
user.login AS user_login, user.project_session AS user_project_session
\nFROM user \nWHERE user.login = ? AND user.project_session = ?\n
LIMIT ? OFFSET ?' ('my_tab_login', [2766233, 2766237, 2766256], 1, 0)
It looks to me like you are passing the list directly into the database query:
def test():
    test = find_current_project_team()
    for find_team in test:
        find_team_db = User.query.filter_by(login=session['login'], project_session=test).first()
Instead, you should pass in the ID only:
Instead, you should pass in the ID only:

def test():
    test = find_current_project_team()
    for find_team in test:
        find_team_db = User.query.filter_by(login=session['login'], project_session=find_team).first()
Aside from that, I think you could do better with the naming conventions:

def test():
    project_teams = find_current_project_team()
    for project_team in project_teams:
        project_team_result = User.query.filter_by(login=session['login'], project_session=project_team).first()
It all works now, thanks.
My code:
project_teams = find_current_project_team()
for project_team in project_teams:
    project_team_result = User.query.filter_by(project_session=project_team).first()
    print(project_team_result)
    if project_team_result is not None:
        print("not none")
    else:
        project_team_result = User(login=session['login'], project_session=project_team)
        db.session.add(project_team_result)
        db.session.commit()
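As a side note, the existence check can be done in a single query instead of one per id; in SQLAlchemy that would be `User.query.filter(User.project_session.in_(ids)).all()`. A sketch of the same idea using the stdlib `sqlite3` module as a stand-in for the real table (table and column names here are illustrative):

```python
import sqlite3

# In-memory table standing in for the User model
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (login TEXT, project_session INTEGER)")
conn.execute("INSERT INTO user VALUES ('my_tab_login', 2766233)")

ids = [2766233, 2766237, 2766256]
placeholders = ",".join("?" for _ in ids)
rows = conn.execute(
    "SELECT project_session FROM user WHERE project_session IN (%s)" % placeholders,
    ids,
).fetchall()

existing = {row[0] for row in rows}          # ids already in the database
new_ids = [i for i in ids if i not in existing]  # ids still to insert
print(existing, new_ids)
```

One round trip to the database replaces a query per id, which matters once the id list grows.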
I made a custom Airflow operator; it takes an input, and its output goes to XCom.
What I want to achieve is to call the operator with some defined input, parse its output in a Python callable inside a BranchPythonOperator, and then pass the parsed output to another task that calls the same operator:
CustomOperator_Task1 = CustomOperator(
    data={
        'type': 'custom',
        'date': '2017-11-12'
    },
    task_id='CustomOperator_Task1',
    dag=dag)

data = {}

def checkOutput(**kwargs):
    result = kwargs['ti'].xcom_pull(task_ids='CustomOperator_Task1')
    if result.success:
        data = result.data
        return "CustomOperator_Task2"
    return "Failure"

BranchOperator_Task = BranchPythonOperator(
    task_id='BranchOperator_Task',
    dag=dag,
    python_callable=checkOutput,
    provide_context=True,
    trigger_rule="all_done")

CustomOperator_Task2 = CustomOperator(
    data=data,
    task_id='CustomOperator_Task2',
    dag=dag)

CustomOperator_Task1 >> BranchOperator_Task >> CustomOperator_Task2
In CustomOperator_Task2 I want to use the data parsed in BranchOperator_Task, but right now data is always the empty {}.
What is the best way to do that?
I see your issue now. Setting the data variable like you are won't work because of how Airflow works. An entirely different process will be running the next task, so it won't have the context of what data was set to.
Instead, BranchOperator_Task has to push the parsed output into another XCom so CustomOperator_Task2 can explicitly fetch it.
def checkOutput(**kwargs):
    ti = kwargs['ti']
    result = ti.xcom_pull(task_ids='CustomOperator_Task1')
    if result.success:
        ti.xcom_push(key='data', value=result.data)
        return "CustomOperator_Task2"
    return "Failure"
BranchOperator_Task = BranchPythonOperator(
    ...)

CustomOperator_Task2 = CustomOperator(
    data_xcom_task_id=BranchOperator_Task.task_id,
    data_xcom_key='data',
    task_id='CustomOperator_Task2',
    dag=dag)
Then your operator might look something like this.
class CustomOperator(BaseOperator):
    @apply_defaults
    def __init__(self, data_xcom_task_id, data_xcom_key, *args, **kwargs):
        super(CustomOperator, self).__init__(*args, **kwargs)
        self.data_xcom_task_id = data_xcom_task_id
        self.data_xcom_key = data_xcom_key

    def execute(self, context):
        data = context['ti'].xcom_pull(task_ids=self.data_xcom_task_id, key=self.data_xcom_key)
        ...
Parameters may not be required if you just want to hardcode them. It depends on your use case.
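The push/pull handshake itself can be sketched without a running Airflow. `FakeTI` below is a made-up stand-in for the task instance, just to show the key-based flow the operator relies on:

```python
class FakeTI:
    """Hypothetical stand-in for Airflow's TaskInstance XCom API."""
    def __init__(self):
        self._store = {}

    def xcom_push(self, key, value):
        self._store[key] = value

    def xcom_pull(self, task_ids=None, key=None):
        # the real implementation also scopes lookups by task_ids and dag_id
        return self._store.get(key)

ti = FakeTI()
# what checkOutput does in the branch callable
ti.xcom_push(key='data', value={'type': 'custom', 'date': '2017-11-12'})
# what CustomOperator_Task2.execute does later, in a separate process
data = ti.xcom_pull(task_ids='BranchOperator_Task', key='data')
print(data)
```

The point is that the only channel between the two tasks is this keyed store, which is why assigning to a module-level `data` variable cannot work.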
As your comment suggests, the return value of your custom operator's execute is None, so your xcom_pull should be expected to come back empty.
Please use xcom_push explicitly, as the default push behavior of Airflow could change over time.
Playing with the new Google App Engine MapReduce library's filters for input_reader, I would like to know how I can filter by ndb.Key.
I read this post and I've played with datetime, string, int and float in filter tuples, but how can I filter by ndb.Key?
When I try to filter by a ndb.Key I get this error:
BadReaderParamsError: Expected Key, got u"Key('Clients', 406)"
Or this error:
TypeError: Key('Clients', 406) is not JSON serializable
I tried passing both an ndb.Key object and the string representation of the ndb.Key.
Here are my two filter tuples:
Sample 1:
'input_reader': {
    'input_reader': 'mapreduce.input_readers.DatastoreInputReader',
    'entity_kind': 'model.Sales',
    'filters': [("client", "=", ndb.Key('Clients', 406))]
}
Sample 2:
'input_reader': {
    'input_reader': 'mapreduce.input_readers.DatastoreInputReader',
    'entity_kind': 'model.Sales',
    'filters': [("client", "=", "%s" % ndb.Key('Clients', 406))]
}
This is a bit tricky.
If you look at the code on Google Code you can see that mapreduce.model defines a JSON_DEFAULTS dict which determines the classes that get special-case handling in JSON serialization/deserialization: by default, just datetime. So, you can monkey-patch the ndb.Key class into there, and provide it with functions to do that serialization/deserialization - something like:
from google.appengine.ext import ndb
from mapreduce import model

def _JsonEncodeKey(o):
    """Json encode an ndb.Key object."""
    return {'key_string': o.urlsafe()}

def _JsonDecodeKey(d):
    """Json decode an ndb.Key object."""
    return ndb.Key(urlsafe=d['key_string'])

model.JSON_DEFAULTS[ndb.Key] = (_JsonEncodeKey, _JsonDecodeKey)
model._TYPE_IDS['Key'] = ndb.Key
You may also need to repeat those last two lines to patch mapreduce.lib.pipeline.util as well.
Also note if you do this, you'll need to ensure that this gets run on any instance that runs any part of a mapreduce: the easiest way to do this is to write a wrapper script that imports the above registration code, as well as mapreduce.main.APP, and override the mapreduce URL in your app.yaml to point to your wrapper.
Make your own input reader based on DatastoreInputReader, which knows how to decode key-based filters:
class DatastoreKeyInputReader(input_readers.DatastoreKeyInputReader):
    """Augment the base input reader to accommodate ReferenceProperty filters"""
    def __init__(self, *args, **kwargs):
        try:
            filters = kwargs['filters']
            decoded = []
            for f in filters:
                value = f[2]
                if isinstance(value, list):
                    value = db.Key.from_path(*value)
                decoded.append((f[0], f[1], value))
            kwargs['filters'] = decoded
        except KeyError:
            pass
        super(DatastoreKeyInputReader, self).__init__(*args, **kwargs)
Run this function on your filters before passing them in as options:
def encode_filters(filters):
    if filters is not None:
        encoded = []
        for f in filters:
            value = f[2]
            if isinstance(value, db.Model):
                value = value.key()
            if isinstance(value, db.Key):
                value = value.to_path()
            entry = (f[0], f[1], value)
            encoded.append(entry)
        filters = encoded
    return filters
Are you aware of the to_old_key() and from_old_key() methods?
I had the same problem and came up with a workaround using computed properties.
You can add a new ndb.ComputedProperty holding the key's id to your Sales model. Ids are just strings, so you won't have any JSON problems.
client_id = ndb.ComputedProperty(lambda self: self.client.id())
And then add that condition to your mapreduce query filters
'input_reader': {
    'input_reader': 'mapreduce.input_readers.DatastoreInputReader',
    'entity_kind': 'model.Sales',
    'filters': [("client_id", "=", "406")]
}
The only drawback is that computed properties are not computed and stored until you call put(), so you will have to traverse all the existing Sales entities and save them:
for sale in Sales.query().fetch():
    sale.put()