I am a complete beginner writing one of my first web apps. I'm using FastAPI and I'm stuck on the logic of creating an endpoint that has to do a lot of things before it returns something back to the user. Since I'm new, I also lack a lot of the vocabulary I think I'd have if I were more experienced, so bear with me:
To start -- What is my web app doing?
Well, I am trying to create something that will pull a schedule from a 3rd party API and then show it to me, allowing me to book things from my webapp rather than having to navigate to the 3rd party's own site. Some of these events are recurring (or are assumed to be) and get recorded as a "preference".
My approach so far, using 'MVVM'
User navigates to www.myfakeurl.com/schedule (either directly or from the home page) (handled by views/schedule.py)
This sends a request to the /schedule router (also handled by views/schedule.py). The request is forwarded to view_models/schedule.py
In the ScheduleViewModel in view_models/schedule.py, there's no payload to check, but I do need to request the schedule from the 3rd party API. I decoupled this step from the constructor on purpose; one must call get_schedule explicitly.
Once get_schedule is called, the user id and token are sent to a function called get_current_schedule in services/3p_schedule.py; this is an asynchronous function that requests the schedule from the 3rd party API.
Back in view_models.schedule.ScheduleViewModel, I validate the response and do some JSON reorganizing. The 3rd party API sends over a lot of stuff I don't need, so I just keep a few relevant pieces of info (like booking_id, whether I'm already signed up for a class, etc.).
Still in view_models.schedule.ScheduleViewModel, I quickly check against the models.preferences.UserPreferences table (in the SQLAlchemy DB) whether any classes in the schedule match my preferences, and create an is_a_preferred_time attribute for each item in the list generated in the previous step.
Still in the ScheduleViewModel, I finally combine the parsed schedule info with the user preference info. The schedule is a Dict[str, List[dict]] where the key is the date and the values are lists of dicts containing time (from the 3rd party API), is_preferred (from the UserPreferences table), booking_id (from the 3rd party API), and currently_enrolled (from the 3rd party API). This is returned to views/schedule.py and the template is generated.
(Somewhat) pseudocode below:
# views/schedule.py
from fastapi import APIRouter
from fastapi.templating import Jinja2Templates
from starlette.requests import Request

from pydantic_schema import Schedule
from view_models.schedule import ScheduleViewModel

router = APIRouter()
templates = Jinja2Templates(directory="templates")


@router.get('/schedule')
async def schedule(request: Request) -> Schedule:
    schedule_model = ScheduleViewModel(request)
    schedule = await schedule_model.get_schedule(schedule_model.user)
    # Jinja2Templates needs the request in the context as well
    return templates.TemplateResponse("schedule.html", {"request": request, "schedule": schedule})
# view_models/schedule.py
from starlette.requests import Request
from sqlalchemy.orm import Session

from dependencies.db import get_db
from pydantic_schema.user import User
from services.3p_services import get_3p_schedule  # note: a module name starting with a digit isn't importable in Python


class ScheduleViewModel:
    def __init__(self, request: Request):
        self.user = get_current_user(request)
        self.db = get_db()

    async def get_schedule(self, user: User):
        # request the schedule from the 3rd party API
        third_party_schedule = await get_3p_schedule(id=user.id, token=user.token)
        # parse the schedule, keeping only the relevant fields
        relevant_schedule_info = self.parse_schedule(third_party_schedule)
        # combine the parsed schedule with the user's preferences
        user_schedule = await self.update_sched_with_prefs(relevant_schedule_info)
        return user_schedule

    def parse_schedule(self, schedule):
        # does stuff
        return {"yyyy-mm-dd": [{"booking_id": 1, "time": "x:xx xm", "currently_enrolled": False}]}  # returns a dict of length N with this format

    async def update_sched_with_prefs(self, schedule):
        # gonna ignore the exact implementation here for brevity
        return {"yyyy-mm-dd": [{"booking_id": 1, "time": "x:xx xm", "currently_enrolled": False, "is_preference": True}]}  # dict of len(N) with values that are lists of variable length
I use the dict to populate a template.
My concerns and confusions
I started off using pydantic for a lot of this, and then I learned about MVVM; I got confused and just went with MVVM since I don't know what I am doing and it seemed clearer.
But this feels like a lot for one endpoint to handle? Or maybe the examples in all my tutorials are just very basic, and real life is closer to what I have going on?
I apologize in advance for just how clueless I am, but I don't have anyone to ask about this and am looking for any and all guidance / resources. I think I am in a very steep section of the learning curve atm :)
Related
I'm trying to implement a service that will get a request from an external API, do some work (which might take time), and then return a response to the external API with the parsed data. However, I'm at a loss as to how to achieve this. I'm using FastAPI as my API service and have been looking at the following documentation: OpenAPI Callbacks
By following that documentation I can get the OpenAPI docs looking all pretty and nice. However, I'm stumped on how to implement the actual callback, and the docs don't have much information about that.
My current implementation is as follows:
from typing import Union
from fastapi import APIRouter, FastAPI
from pydantic import BaseModel, AnyHttpUrl
import requests
import time
from threading import Thread

app = FastAPI()


class Invoice(BaseModel):
    id: str
    title: Union[str, None] = None
    customer: str
    total: float


class InvoiceEvent(BaseModel):
    description: str
    paid: bool


class InvoiceEventReceived(BaseModel):
    ok: bool


invoices_callback_router = APIRouter()


@invoices_callback_router.post(
    "{$callback_url}/invoices/{$request.body.id}", response_model=InvoiceEventReceived
)
def invoice_notification(body: InvoiceEvent):
    pass


@app.post("/invoices/", callbacks=invoices_callback_router.routes)
async def create_invoice(invoice: Invoice, callback_url: Union[AnyHttpUrl, None] = None):
    # Send the invoice, collect the money, send the notification (the callback)
    thread = Thread(target=do_invoice(invoice, callback_url))
    thread.start()
    return {"msg": "Invoice received"}


def do_invoice(invoice: Invoice, callback_url: AnyHttpUrl):
    time.sleep(10)
    url = callback_url + "/invoices/" + invoice.id
    json = {
        "data": ["Payment celebration"],
    }
    requests.post(url=url, json=json)
I thought putting the actual callback in a separate thread might work and that the {"msg": "Invoice received"} would be returned immediately, and then 10s later the external API would receive the result from the do_invoice function. But this doesn't seem to be the case, so perhaps I'm doing something wrong.
I've also tried putting the logic in the invoice_notification function but that doesn't seem to do anything at all.
So what is the correct way to implement a callback like the one I want? Thankful for any help!
I thought putting the actual callback in a separate thread might work and that the {"msg": "Invoice received"} would be returned immediately, and then 10s later the external API would receive the result from the do_invoice function. But this doesn't seem to be the case, so perhaps I'm doing something wrong.
If you would like to run a task after the response has been sent, you could use a BackgroundTask, as demonstrated in this answer, as well as here and here. If you instead would like to wait for the task to finish before returning the response, you could run the task in either an external ThreadPool or ProcessPool (depending on the nature of the task) and await it, as explained in this detailed answer.
I would also strongly recommend using the httpx library in an async environment such as FastAPI, instead of Python requests; you may find details and working examples here, as well as here and here.
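To make that suggestion concrete, here is a minimal sketch (not taken from the answers above) of the question's create_invoice endpoint rewritten to use BackgroundTasks together with httpx. It reuses the Invoice model and endpoint path from the question, omits the callbacks= documentation argument for brevity, and the callback payload is only an assumed example:

from typing import Union

import httpx
from fastapi import BackgroundTasks, FastAPI

app = FastAPI()
# Invoice is the pydantic model defined in the question


async def send_invoice_callback(callback_url: str, invoice_id: str):
    # runs after the response below has already been sent to the client
    async with httpx.AsyncClient() as client:
        await client.post(
            f"{callback_url}/invoices/{invoice_id}",
            json={"data": ["Payment celebration"]},  # assumed payload, mirroring the question
        )


@app.post("/invoices/")
async def create_invoice(invoice: Invoice, background_tasks: BackgroundTasks,
                         callback_url: Union[str, None] = None):
    # schedule the callback as a background task instead of starting a Thread by hand
    if callback_url:
        background_tasks.add_task(send_invoice_callback, callback_url, invoice.id)
    return {"msg": "Invoice received"}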
I am trying to build a REST API with only one call.
Sometimes it takes up to 30 seconds for a program to return a response. But if the user thinks the service is lagging, they make a new call, and my app returns a response with error code 500 (Internal Server Error).
For now it is enough for me to block any new requests if the last one is not ready. Is there any simple way to do this?
I know that there are a lot of queueing managers like Celery, but I prefer not to overload my app with any large dependencies.
You could use Flask-Limiter to ignore new requests from that remote address.
pip install Flask-Limiter
Check this quickstart:
from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)
limiter = Limiter(
    app,
    key_func=get_remote_address,
    default_limits=["200 per day", "50 per hour"]
)

@app.route("/slow")
@limiter.limit("1 per day")
def slow():
    return "24"

@app.route("/fast")
def fast():
    return "42"

@app.route("/ping")
@limiter.exempt
def ping():
    return "PONG"
As you can see, you can ignore new requests from that remote IP address for a certain amount of time while you finish the process you're running.
DOCS
Check these two links:
Flask-Limiter Documentation
Flask-Limiter Quick start
I'm building a django app that will provide real-time data. I'm fairly new to Django, and now I'm focusing on how to update my data in real time, without having to reload the whole page.
Some clarification: the real-time data should be updated regularly, not only through user input.
View
def home(request):
    symbol = "BTCUSDT"
    tst = client.get_ticker(symbol=symbol)
    test = tst['lastPrice']
    context = {"test": test}
    return render(request, "main/home.html", context)
Template
<h3> var: {{test}} </h3>
I already asked this question, but I'm having some doubts:
I've been told to use Ajax, and that's OK, but is Ajax good for this case, where I will have a page loaded with data updated in real time every x seconds?
I have also been told to use DRF (Django Rest Framework). I've been digging through it a lot, but what's not clear to me is how it would work in this particular case.
Here below, I'm giving a checklist of the actions needed to implement a solution based on Websocket and Django Channels, as suggested in a previous comment.
The motivations for this are given at the end.
1) Connect to the Websocket and prepare to receive messages
On the client, you need to execute the following JavaScript code:
<script language="javascript">

    var ws_url = 'ws://' + window.location.host + '/ws/ticks/';
    var ticksSocket = new WebSocket(ws_url);

    ticksSocket.onmessage = function(event) {
        var data = JSON.parse(event.data);
        console.log('data', data);
        // do whatever is required with the received data ...
    };

</script>
Here, we open the Websocket, and later elaborate the notifications sent by the server in the onmessage callback.
Possible improvements:
support SSL connections
use ReconnectingWebSocket: a small wrapper on WebSocket API that automatically reconnects
<script language="javascript">

    var prefix = (window.location.protocol == 'https:') ? 'wss://' : 'ws://';
    var ws_url = prefix + window.location.host + '/ws/ticks/';
    var ticksSocket = new ReconnectingWebSocket(ws_url);
    ...

</script>
2) Install and configure Django Channels and Channel Layers
To configure Django Channels, follow these instructions:
https://channels.readthedocs.io/en/latest/installation.html
Channel Layers is an optional component of Django Channels which provides a "group" abstraction which we'll use later; you can follow the instructions given here:
https://channels.readthedocs.io/en/latest/topics/channel_layers.html#
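For orientation, a minimal channel layer configuration could look like the following sketch. It assumes the channels_redis backend with a local Redis instance, and it also defines the TICKS_GROUP_NAME constant used by the consumer later; the concrete values and the "myproject" name are assumptions, not taken from the linked docs:

# settings.py (sketch)
ASGI_APPLICATION = "myproject.routing.application"

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("127.0.0.1", 6379)],
        },
    },
}

TICKS_GROUP_NAME = "ticks"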
3) Publish the Websocket endpoint
Routing provides, for Websocket (and other protocols), a mapping between the published endpoints and the associated server-side code, much as urlpatterns does for HTTP in a traditional Django project.
file routing.py
from django.urls import path
from channels.routing import ProtocolTypeRouter, URLRouter

from . import consumers

application = ProtocolTypeRouter({
    "websocket": URLRouter([
        path("ws/ticks/", consumers.TicksSyncConsumer),
    ]),
})
4) Write the consumer
The Consumer is a class which provides handlers for Websocket standard (and, possibly, custom) events. In a sense, it does for Websocket what a Django view does for HTTP.
In our case:
websocket_connect(): we accept the connections and register incoming clients to the "ticks" group
websocket_disconnect(): clean up by removing the client from the group
new_ticks(): our custom handler which broadcasts the received ticks to its Websocket client
I assume TICKS_GROUP_NAME is a constant string value defined in the project's settings.
file consumers.py:
from django.conf import settings
from asgiref.sync import async_to_sync
from channels.consumer import SyncConsumer


class TicksSyncConsumer(SyncConsumer):

    def websocket_connect(self, event):
        self.send({
            'type': 'websocket.accept'
        })
        # Join ticks group
        async_to_sync(self.channel_layer.group_add)(
            settings.TICKS_GROUP_NAME,
            self.channel_name
        )

    def websocket_disconnect(self, event):
        # Leave ticks group
        async_to_sync(self.channel_layer.group_discard)(
            settings.TICKS_GROUP_NAME,
            self.channel_name
        )

    def new_ticks(self, event):
        self.send({
            'type': 'websocket.send',
            'text': event['content'],
        })
5) And finally: broadcast the new ticks
For example:
ticks = [
    {'symbol': 'BTCUSDT', 'lastPrice': 1234, ...},
    ...
]
broadcast_ticks(ticks)
where:
import json
from asgiref.sync import async_to_sync
from django.conf import settings  # needed for TICKS_GROUP_NAME
import channels.layers


def broadcast_ticks(ticks):
    channel_layer = channels.layers.get_channel_layer()
    async_to_sync(channel_layer.group_send)(
        settings.TICKS_GROUP_NAME, {
            "type": 'new_ticks',
            "content": json.dumps(ticks),
        })
We need to enclose the call to group_send() in the async_to_sync() wrapper, as channels.layers provides only the async implementation, and we're calling it from a sync context. More details on this are given in the Django Channels documentation.
Notes:
make sure that the "type" attribute matches the name of the consumer's handler (that is: 'new_ticks'); this is required
every client has its own consumer; so when we wrote self.send() in the consumer's handler, that meant: send the data to a single client
here, we send the data to the "group" abstraction, and Channel Layers in turn will deliver it to every registered consumer
Motivations
Polling is still the most appropriate choice in some cases, being simple and effective.
However, on some occasions you might suffer a few limitations:
you keep querying the server even when no new data are available
you introduce some latency (in the worst case, the full period of the polling). The tradeoff is: less latency = more traffic.
With Websocket, you can instead notify the clients only when (and as soon as) new data are available, by sending them a specific message.
AJAX calls and REST APIs are the combination you are looking for. For real-time updates of data, polling the REST API at regular intervals is the best option you have. Something like:
function doPoll() {
    $.post('<api_endpoint_here>', function(data) {
        // Do operation to update the data here
        setTimeout(doPoll, <how_much_delay>);
    });
}
Now add Django Rest Framework to your project. They have a simple tutorial here. Create an API endpoint which will return the data as JSON, and use that URL in the AJAX call.
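As a rough sketch only (the view name and URL below are assumptions, not part of the original answer), such an endpoint could return the latest price as JSON for the AJAX poll to consume:

# views.py (sketch)
from rest_framework.views import APIView
from rest_framework.response import Response


class TickerView(APIView):
    def get(self, request):
        # 'client' is the same ticker client used in the question's home() view
        tst = client.get_ticker(symbol="BTCUSDT")
        return Response({"test": tst['lastPrice']})

# urls.py (sketch)
# path('api/ticker/', TickerView.as_view())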
Now you might be confused because you passed the data into the template as context while rendering the page from your home view. That's not going to work anymore. You'll have to add a script to update the value of the element, like
document.getElementById("element_id").value = "New Value";
where element_id is the id you give to the element, and "New Value" is the data you get from the response of the AJAX call.
I hope this gives you a basic context.
I have made a Flask application which receives messages from the user and generates a reply from a chatbot backend I have created. When I run my app.py file and go to my localhost, it runs fine as long as I only open one instance; however, if I try to open multiple instances, they all try to use the same bot. How do I create a unique bot for each session? I tried using g.bot = mybot(), but the problem was it still kept creating a new bot each time the user replied to the bot. I am relatively new to this, so links to a detailed explanation would be appreciated. Note that some pieces of code are unrelated junk from previous versions.
app = Flask(__name__)

items = ["CommonName", "Title",
         "Department", "Address", "City", "PhoneNum"]

app.config.from_object(__name__)

bot2 = Bot2()


@app.before_request
def before_request():
    session['uid'] = uuid.uuid4()
    print(session['uid'])
    g.bot2 = Bot2()


@app.route("/", methods=['GET'])
def home():
    return render_template("index.html")


@app.route("/tables")
def show_tables():
    data = bot2.df
    if data.size == 0:
        return render_template('sad.html')
    return render_template('view.html', tables=[data.to_html(classes='df')], titles=items)


@app.route("/get")
def get_bot_response():
    userText = request.args.get('msg')
    bot2.input(str(userText))
    print(bot2.message)
    g.bot2.input(str(userText))
    print(g.bot2.message)
    show_tables()
    if (bot2.export):
        return (str(bot2.message) + "<br/>\nWould you like to narrow your results?")
        # return (str(bot.message) + "click here" + "</span></p><p class=\"botText\"><span> Would you like to narrow your results?")
    else:
        return (str(bot2.message))


if __name__ == "__main__":
    app.secret_key = 'super secret key'
    app.run(threaded=True)
Problem: A new bot is created each time the user replies to the bot
Reason: app.before_request runs before each request that your flask server receives. Hence each reply will create a new Bot2 instance. You can read more about it here.
Problem: Creating a bot per instance
Possible Solution: I'm not really sure what you mean by opening multiple instances (are you trying to run multiple instances of the same server, or are multiple people accessing a single server?). I would suggest reading up on sessions in Flask and storing a Bot2 instance inside the session as a server-side variable. You can read more about it here and here.
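As a rough sketch of that idea (not from the original answer): a Flask session cookie can't hold a live Python object, so one simple workaround is a server-side registry keyed by the session's uid; the bots dict below is an assumed name:

# sketch: one Bot2 instance per session, reused across requests
bots = {}

@app.before_request
def before_request():
    if 'uid' not in session:
        session['uid'] = str(uuid.uuid4())
    # create the bot only the first time this session is seen, then reuse it
    g.bot2 = bots.setdefault(session['uid'], Bot2())

Note that this keeps the bots in process memory, so it only works for a single-process server.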
You could try assigning the session id from flask-login to the bot instead of creating a unique id at before_request. Right now, like @Imma said, a new bot is created for every request.
You will have to store an array of classes. When a session is created, i.e., a user or an anonymous user logs in, a bot/class instance gets created and is pushed onto the array.
You can keep the array in the session object... do note that the session object will get transferred in the form of a cookie to the front end... so you may potentially be exposing all the chat sessions... Also, a lot of users will unnecessarily slow down the response.
Another alternative is to create a separate container and run the bot as a separate microservice rather than integrate it with your existing flask application (this is what we ended up doing)
Delete the line bot2 = Bot2(), and change all the references to bot2 to g.bot2.
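For illustration only (assuming a before_request hook that reuses one bot per session, as sketched earlier), the reply endpoint would then look like:

@app.route("/get")
def get_bot_response():
    userText = request.args.get('msg')
    g.bot2.input(str(userText))  # use the per-session bot from g, not a module-level bot2
    return str(g.bot2.message)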
I'm creating an API with Flask that is being used for a mobile platform, but I also want the application itself to digest the API in order to render web content. I'm wondering what the best way is to access API resource methods inside of Flask? For instance if I have the following class added as a resource:
class FooAPI(Resource):
    def __init__(self):
        # Do some things
        super(FooAPI, self).__init__()

    def post(self, id):
        # return something

    def get(self):
        # return something

api = Api(app)
api.add_resource(FooAPI, '/api/foo', endpoint='foo')
Then in a controller I want:
#app.route("/bar")
def bar():
#Get return value from post() in FooAPI
How do I get the return value of post() from FooAPI? Can I do it somehow through the api variable? Or do I have to create an instance of FooAPI in the controller? It seems like there has to be an easy way to do this that I'm just not understanding...
The obvious way for your application to consume the API is to invoke it like any other client. The fact that the application would be acting as a server and a client at the same time does not matter, the client portion can place requests into localhost and the server part will get them in the same way it gets external requests. To generate HTTP requests you can use requests, or urllib2 from the standard library.
But while the above method will work just fine it seems overkill to me. In my opinion a better approach is to expose the common functionality of your application in a way that both the regular application and the API can invoke. For example, you could have a package called FooLib that implements all the shared logic, then FooAPI becomes a thin wrapper around FooLib, and both FooAPI and FooApp call FooLib to get things done.
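A minimal sketch of that layout (the module and function names here are illustrative assumptions, not from the answer):

# foolib.py: shared business logic used by both the web app and the API
def create_foo(data):
    # validate, write to the database, etc.
    return {"id": 1, **data}


# fooapi.py: thin REST wrapper around FooLib
from flask import request
from flask_restful import Resource
import foolib

class FooAPI(Resource):
    def post(self):
        return foolib.create_foo(request.get_json()), 201


# fooapp.py: regular view calling the same shared logic directly, no HTTP round trip
from flask import render_template, request
import foolib

@app.route("/bar", methods=["POST"])  # 'app' is the Flask instance from the question
def bar():
    foo = foolib.create_foo(request.form.to_dict())
    return render_template("bar.html", foo=foo)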
Another approach is to have both the app and API in the same Flask(-RESTful) instance. Then, you can have the app call the API methods/functions internally (without HTTP). Let's consider a simple app that manages files on a server:
# API. Returns filename/filesize pairs of all files in 'path'
@app.route('/api/files/', methods=['GET'])
def get_files():
    # os.path.getsize returns the file size on disk (sys.getsizeof would only measure the path string)
    files = [{'name': x, 'size': os.path.getsize(os.path.join(path, x))} for x in os.listdir(path)]
    return jsonify(files)
# app. Gets all files from the API, uses the API data to render a template for the user
@app.route('/app/files/', methods=['GET'])
def app_get_files():
    response = get_files()  # you may verify the status code here before continuing
    return render_template('files.html', files=response.get_json())
You can push all your requests around (from the API to the app and back) without including them in your function calls since Flask's request object is global. For example, for an app resource that handles a file upload, you can simply call:
@app.route('/app/files/post', methods=['POST'])
def app_post_file():
    response = post_file()
    flash('Your file was uploaded successfully')  # if status_code == 200
    return render_template('home.html')
The associated API resource being:
@app.route('/api/files/', methods=['POST'])
def post_file():
    file = request.files['file']
    ....
    ....
    return jsonify({'some info about the file upload'})
For large volumes of application data, though, the overhead of wrapping/unwrapping JSON makes Miguel's second solution preferable.
In your case, you would want to call this in your controller:
response = FooAPI().post(id)
I managed to achieve this. Sometimes APIs get ugly; in my case, I need to call the function recursively, as the application has an extremely recursive nature (a tree). Recursive functions themselves are quite expensive; recursive HTTP requests would be a world of memory and CPU waste.
So here's the snippet, check the third for loop:
class IntentAPI(Resource):
    def get(self, id):
        patterns = [pattern.dict() for pattern in Pattern.query.filter(Pattern.intent_id == id)]
        responses = [response.dict() for response in Response.query.filter(Response.intent_id == id)]
        return jsonify({'patterns': patterns, 'responses': responses})

    def delete(self, id):
        for pattern in Pattern.query.filter(Pattern.intent_id == id):
            db.session.delete(pattern)
        for response in Response.query.filter(Response.intent_id == id):
            db.session.delete(response)
        for intent in Intent.query.filter(Intent.context == Intent.query.get(id).set_context):
            self.delete(intent.id)  # or IntentAPI.delete(self, intent.id)
        db.session.delete(Intent.query.get(id))
        db.session.commit()
        return jsonify({'result': True})