The standard MongoDB driver for Python, PyMongo, has a find_and_modify method, but the async client Motor has nothing like it. The documentation makes some suggestions about the findAndModify command, but there is no example of how to use it.
How can I use findAndModify in Motor?
I haven't found any better solution than this:
res = await db.command(
    'findAndModify',
    collection_name,
    query={'status': 'initial'},
    update={
        '$set': {'status': 'in_progress'}
    }
)
if not res['ok']:
    raise DbError(f'Error when findAndModify: {res}')
if doc := res['value']:
    td = TaskData(
        task_id=doc['task_id'],
        status=doc['status'],
    )
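For what it's worth, Motor mirrors PyMongo's collection API, so it also exposes find_one_and_update, which wraps the same server command. A minimal sketch, reusing the collection and field names from the snippet above:

doc = await db[collection_name].find_one_and_update(
    {'status': 'initial'},
    {'$set': {'status': 'in_progress'}},
)
# Like the raw command with default options, this returns the
# pre-update document, or None if nothing matched.
if doc is not None:
    td = TaskData(task_id=doc['task_id'], status=doc['status'])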
I am querying my data in Athena from a Lambda function using Boto3. The result is in JSON format. When I run my Lambda function, I get the whole record. How can I paginate this data? I only want to fetch a small amount of data per page and send that small dataset to the UI to display. Here is my Python code:
import boto3

def lambda_handler(event, context):
    athena = boto3.client('athena')
    s3 = boto3.client('s3')
    query = event['query']
    # Start the query execution and keep its ID for fetching results
    query_id = athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={'Database': DATABASE},
        ResultConfiguration={'OutputLocation': output}
    )['QueryExecutionId']
I use Postman to pass my query and get the data. I am aware of the SQL LIMIT and OFFSET clauses, but I want to know whether there is a better way to pass LIMIT and OFFSET parameters in my function.
Please help me with this.
Thanks.
A quick Google search found this answer in the Athena docs, which looks promising. Example from the docs:
response_iterator = paginator.paginate(
    QueryExecutionId='string',
    PaginationConfig={
        'MaxItems': 123,
        'PageSize': 123,
        'StartingToken': 'string'
    }
)
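The docs snippet assumes the paginator already exists. A fuller sketch, where query_id comes from start_query_execution as in your code and the page size is only illustrative:

import boto3

athena = boto3.client('athena')

# Build a paginator over GetQueryResults and walk the result pages.
paginator = athena.get_paginator('get_query_results')
pages = paginator.paginate(
    QueryExecutionId=query_id,          # returned by start_query_execution
    PaginationConfig={'PageSize': 50},  # rows per page, adjust to taste
)
for page in pages:
    for row in page['ResultSet']['Rows']:
        print(row['Data'])

For UI-style paging you could instead call get_query_results directly with MaxResults and hand the returned NextToken back to the client as the cursor for the next page.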
I hope this helps!
I'm currently writing my first bot using pyTelegramBotAPI. I want to disable link previews on certain messages. How do I do this?
It looks like there is a disable_web_page_preview parameter on the send_message method.
tb = telebot.TeleBot(TOKEN)
tb.send_message(123456, "Hi <link>", disable_web_page_preview=True)
Original code:
def send_message(token, chat_id, text, disable_web_page_preview=None, reply_to_message_id=None, reply_markup=None,
                 parse_mode=None, disable_notification=None):
Try the link_preview (Telethon) / disable_web_page_preview (pyTelegramBotAPI) parameter:
client.send_message('chat_name', '[link](example.com)', parse_mode="Markdown", link_preview=False)
This works for me:
client.send_message(chat_name, msg, link_preview=False)
(Python 3.8, Telethon 1.24)
I am using Twisted, and I would like a single deferred operation to return an indicator of whether it succeeded, just like when using a DeferredList.
This works with multiple deferreds:
my_query = deferToThread(
    self.mongo_pool.db[self.collection_name].find_one,
    {some_query}
)
(my_success_1, my_data_1), (my_success_2, my_data_2) = await DeferredList(
    [ensureDeferred(my_query_1), ensureDeferred(my_query_2)]
)
But doing that with just one deferred returns the data directly:
my_return = await ensureDeferred(my_query)
When I wrap it in a Deferred, the application just hangs and doesn't respond:
my_return = await Deferred(ensureDeferred(my_query))
So, to get that indicator, I end up doing the following. It works, but it definitely looks wrong:
my_return = await DeferredList([ensureDeferred(my_query)])
(my_success_indicator, my_data) = my_return[0]
Is there a better way of doing it? I am on Twisted version 17.05
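One hedged alternative sketch: when you await a Deferred, a failure is raised as an exception at the await point, so a try/except recovers the same information as DeferredList's (success, value) pairs without the one-element list:

try:
    my_data = await ensureDeferred(my_query)
    my_success_indicator = True
except Exception as e:
    # The failure's underlying exception is raised here.
    my_data = e
    my_success_indicator = False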
I'm trying to get two basic lambdas working on the Python 2.7 runtime for SQS message processing. One lambda reads from SQS and invokes the other lambda, passing data to it via the context. I'm able to invoke the other lambda, but the user context is empty in it. This is the code of my SQS reader lambda:
import boto3
import base64
import json
import logging
messageDict = {'queue_url': 'queue_url',
               'receipt_handle': 'receipt_handle',
               'body': 'messageBody'}
ctx = {
    'custom': messageDict,
    'client': 'SQS_READER_LAMBDA',
    'env': {'test': 'test'},
}
payload = json.dumps(ctx)
payloadBase64 = base64.b64encode(payload)

client = boto3.client('lambda')
client.invoke(
    FunctionName='LambdaWorker',
    InvocationType='Event',
    LogType='None',
    ClientContext=payloadBase64,
    Payload=payload
)
And this is how I'm trying to inspect and print the contents of the context variable inside the invoked lambda, so I can check the logs in CloudWatch:
import inspect
import logging

memberList = inspect.getmembers(context)
for a in memberList:
    logging.error(a)
The problem is that nothing works, and CloudWatch shows the user context is empty:
('client_context', None)
I've tried example1, example2, example3, example4
Any ideas?
I gave up trying to pass the data through the context. However, I was able to pass the data through the Payload param:
client.invoke(
FunctionName='LambdaWorker',
InvocationType='Event',
LogType='None',
Payload=json.dumps(payload)
)
And then read it from the event parameter inside the invoked lambda:
ctx = json.dumps(event)
The code in the question is very close. The only issue is the InvocationType:
This will work with the code in your question:
client.invoke(
    FunctionName='LambdaWorker',
    InvocationType='RequestResponse',
    LogType='None',
    ClientContext=payloadBase64
)
However, this changes the invocation to synchronous, which may be undesirable. The reason for this behavior is not clear.
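For completeness, a sketch of reading the context inside the worker lambda; the Python runtime exposes the decoded ClientContext as attributes, and the key names below match the ctx dict from the question:

def lambda_handler(event, context):
    # client_context is only populated when the caller sets ClientContext.
    if context.client_context is not None:
        message = context.client_context.custom  # the 'custom' dict from the caller
        env = context.client_context.env         # the 'env' dict from the caller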
I am getting started with the BigQuery API in Python, following the documentation.
This is my code, adapted from an example:
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
bigquery_service = build('bigquery', 'v2', credentials=credentials)

try:
    query_request = bigquery_service.jobs()
    query_data = {
        'query': (
            'SELECT * FROM [mytable] LIMIT 10;'
        )
    }
    query_response = query_request.query(
        projectId=project_id,
        body=query_data).execute()
    for row in query_response['rows']:
        print('\t'.join(field['v'] for field in row['f']))
except HttpError as err:
    print('Error: {}'.format(err.content))
The problem I'm having is that I keep getting the response:
{u'kind': u'bigquery#queryResponse',
u'jobComplete': False,
u'jobReference': {u'projectId': 'myproject', u'jobId': u'xxxx'}}
So it has no rows field. Looking at the docs, I guess I need to take the jobId field and use it to check when the job is complete, and then get the data.
The problem I'm having is that the docs are a bit scattered and confusing, and I don't know how to do this.
I think I need to use this method to check the status of the job, but how do I adapt it for Python? And how often should I check / how long should I wait?
Could anyone give me an example?
There is code to do what you want here.
If you want more background on what it is doing, check out Google BigQuery Analytics chapter 7 (the relevant snippet is available here.)
TL;DR:
Your initial jobs.query() call is returning before the query completes; to wait for the job to be done you'll need to poll on jobs.getQueryResults(). You can then page through the results of that call.
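A minimal polling sketch along those lines, reusing bigquery_service, project_id, and query_response from the question (the method names are from the BigQuery v2 REST API; the timeout value is illustrative):

job_id = query_response['jobReference']['jobId']

# Poll jobs.getQueryResults until the job finishes. timeoutMs asks the
# server to hold the request open (a long poll) instead of returning at once.
results = bigquery_service.jobs().getQueryResults(
    projectId=project_id,
    jobId=job_id,
    timeoutMs=10000,  # wait up to 10 seconds per request
).execute()
while not results.get('jobComplete'):
    results = bigquery_service.jobs().getQueryResults(
        projectId=project_id, jobId=job_id, timeoutMs=10000).execute()

for row in results.get('rows', []):
    print('\t'.join(field['v'] for field in row['f']))

Paging works the same way: pass each response's pageToken back into the next getQueryResults call until no pageToken is returned.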