I've recently started working on a project in Azure Functions that has two components.
Both components use the same shared code, but the hosting plans that suit them seem to differ.
An API endpoint for web users that controls objects saved in a Cosmos DB - this would benefit from an App Service plan because of the cold-start issues with the Consumption plan.
A backend scheduled process that uses these objects - this would benefit from a Consumption plan, since it can use automatic scaling and exact execution time is not so important.
I thought of using the Premium plan to solve both issues, but it seems pretty expensive (my workload is pretty low, and from the calculator on Azure's pricing page it looks like around $150 a month by default - correct me if I'm wrong).
I was wondering if there is a way to split a function app into two plans, or have two function apps share code.
Thanks!
I was wondering if there is a way to split a function app into two plans, or have two function apps share code.
The first option is not something that can be done. The second option is the one to go for here: you can put the shared code in a separate class library project and reference it from both Function Apps.
A class library defines types and methods that are called by an application. If the library targets .NET Standard 2.0, it can be called by any .NET implementation (including .NET Framework) that supports .NET Standard 2.0. If the library targets .NET 5, it can be called by any application that targets .NET 5.
When you create a class library, you can distribute it as a NuGet package or as a component bundled with the application that uses it.
More information: Tutorial: Create a .NET class library using Visual Studio
EDIT:
For Python, you can create modules.
Python has a way to put definitions in a file and use them in a script or in an interactive instance of the interpreter. Such a file is called a module; definitions from a module can be imported into other modules or into the main module (the collection of variables that you have access to in a script executed at the top level and in calculator mode).
More info: 6. Modules
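As a rough illustration (the shared_lib package name and the function folders below are invented for this sketch), the shared code can live in one package that both Function Apps deploy alongside their own code:

# shared_lib/items.py - hypothetical module shared by both Function Apps
def normalize_item(raw: dict) -> dict:
    """Drop Cosmos DB system fields and apply common defaults."""
    cleaned = {k: v for k, v in raw.items() if not k.startswith("_")}
    cleaned.setdefault("status", "active")
    return cleaned

# api_app/HttpGetItem/__init__.py (HTTP-triggered app on an App Service plan)
#     from shared_lib.items import normalize_item
# scheduled_app/TimerProcess/__init__.py (timer-triggered app on a Consumption plan)
#     from shared_lib.items import normalize_item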
Related
I am a C# developer who has been tasked with converting some deployed C# Azure functions (mostly webhooks / SB) to Python.
I am struggling with the concept of the python equivalency to dependency injection.
Take, for example, an API client class that continuously calls some third-party API to push and pull data. In .NET, if I had a webhook function that needed to use this API client, I would register a singleton service in Startup.cs and inject it into my Azure webhook function. This is advantageous because I can handle token refreshing and so on inside the service class itself and keep the token in memory, instead of having to re-create an instance of the API client class each time the webhook fires.
How do I do this in Python? Or what is the right way of doing something similar in the same environment (Azure Functions), where we store tokens in memory AND create a service once and use that same service across multiple functions?
Thanks
While not widely used because of the way Python is set up, dependency injection can be implemented in Python. There is a Python package called dependency-injector which helps with implementing DI.
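As a minimal sketch (the ApiClient class and the configuration keys are invented here, not taken from your code), a singleton provider with dependency-injector could look like this:

from dependency_injector import containers, providers


class ApiClient:
    """Illustrative client for a third-party API."""
    def __init__(self, base_url: str, api_key: str):
        self.base_url = base_url
        self.api_key = api_key


class Container(containers.DeclarativeContainer):
    config = providers.Configuration()

    # Singleton: every call to container.api_client() returns the same instance.
    api_client = providers.Singleton(
        ApiClient,
        base_url=config.base_url,
        api_key=config.api_key,
    )


container = Container()
container.config.base_url.from_value("https://api.example.com")  # placeholder
container.config.api_key.from_env("API_KEY")                      # placeholder
client = container.api_client()  # reused on repeated calls within the same process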
You should not store the tokens in the function's local memory. Functions are stateless and serverless, which means we have neither control over nor knowledge of the servers our function runs on, so we don't know how long anything will stay in memory. Instead, once you acquire the tokens, add them to Azure Key Vault so that you can retrieve them whenever you need them.
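For illustration, a minimal sketch of reading such a token back from Key Vault with the azure-identity and azure-keyvault-secrets packages (the vault URL and secret name are placeholders):

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

VAULT_URL = "https://<your-vault-name>.vault.azure.net"  # placeholder

credential = DefaultAzureCredential()  # uses the Function App's managed identity when deployed
client = SecretClient(vault_url=VAULT_URL, credential=credential)
token = client.get_secret("third-party-api-token").value  # hypothetical secret name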
You can share a Python class among the functions: just place it in a separate folder and import it in your functions.
from ..common_files import test_file
Refer to this article by Szymon Miks for examples of dependency injection in Python.
Refer to this documentation on sharing files between functions.
Refer to the following documentation on Azure Key Vault about retrieving keys.
I'm writing a Python application that will be installed at multiple sites. At each of those sites it needs to interface with other software that has a different API at each site but logically does the same thing.
My solution is to create a base Class that encapsulates the common logic and provides a single interface back to the main app, and then separate subclasses for each different site-specific api. The base class and subclasses would all be defined in an "interfaces" package deployed to each site alongside the main app. To ensure a common code base (for ease of deployment and maintenance) all sites would have identical code, both the main app package and the interfaces package.
My question is, how best to ensure the main app uses the correct subclass from the interfaces package?
My current thoughts are to have the main app call a "get_interface" function in the interfaces package which reads an ini file to identify which interface subclass is in use at that site and returns that subclass to the main app. Obviously this requires the ini file to be site-specific, but that is the only thing that would be.
Is that the best approach?
To add some more concrete info as requested:
The application is astronomy-related. It aims to automate a pipeline from target identification, through scheduling of telescope imaging sessions, to processing of the resultant images. It makes extensive use of Astropy and affiliated packages.
There are several areas where it needs to interface to other software. Eg:
Target Identification, where it gets target info from various astronomical databases through web services.
Platesolving, where it uploads image data to either local (Windows or Linux-based) applications or remote web services such as astrometry.net.
Imaging, where it needs to interface to commercial telescope/observatory control packages (eg ACP or ASA Sequence) to actually perform imaging sessions.
The logical interfaces are all pretty much identical within each of these areas, so in the above cases I envisage three classes to provide the abstractions, each implemented through subclasses specific to the target system I'm interfacing with.
I’m really just exploring the best design pattern for how to implement this in a way that keeps the core of the application totally isolated from the implementation of the interfaces to the other systems (as these change fairly frequently).
In a way it’s a bit like using device drivers. The core application needs to call for a service to be performed (not caring how that is achieved) with that call being routed to the “driver” (my appropriate interface sub class) which would be potentially different for each site implementation.
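A rough sketch of the get_interface idea described above, with invented module, class and ini key names:

# interfaces/__init__.py - hypothetical layout for the "interfaces" package
import configparser
from importlib import import_module


class BaseImagingInterface:
    """Common logic shared by every site-specific imaging interface."""
    def run_session(self, plan):
        raise NotImplementedError


def get_interface(ini_path: str = "site.ini") -> BaseImagingInterface:
    """Read the site-specific ini file and return an instance of the configured subclass."""
    config = configparser.ConfigParser()
    config.read(ini_path)
    # site.ini (the only file that differs per site), e.g.:
    #     [imaging]
    #     module = interfaces.acp
    #     class = AcpInterface
    module = import_module(config["imaging"]["module"])
    cls = getattr(module, config["imaging"]["class"])
    return cls()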
I'm using Azure Function Apps with Python. I have two dozen function apps that all use a Postgres DB and Custom Vision. All function apps are set up as HTTP triggers. Right now, when a function is triggered, a new database handler (or Custom Vision handler) object is created, used, and terminated when the function app call is done.
It seems very counterproductive to instantiate new objects on every single request that comes in. Is there a way to instantiate shared objects once and then pass them to a function when it is called?
In general, Azure Functions are intended to be stateless and not share objects from one invocation to the next. However, there are some exceptions.
Sharing Connection Objects
The Azure docs on the Improper Instantiation antipattern recommend sharing connection objects that are intended to be opened once and used again and again, rather than recreating them on every call.
There are some things to keep in mind for this to work for you, mainly:
The key element of this antipattern is repeatedly creating and destroying instances of a shareable object. If a class is not shareable (not thread-safe), then this antipattern does not apply.
They have some walkthroughs there that will probably help you. Since your question is fairly generic, the best I can do is recommend you read through it and see if that will help you.
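In Python Functions, one common way to apply that guidance is to create the handler once at module scope so it is reused for every invocation that lands on a warm worker. A rough sketch, where DatabaseHandler and shared.handlers stand in for your own code:

import azure.functions as func

from shared.handlers import DatabaseHandler  # hypothetical shared module

# Created once per worker process and reused while the instance stays warm.
db_handler = DatabaseHandler()


def main(req: func.HttpRequest) -> func.HttpResponse:
    # Reuses the already-initialised handler instead of rebuilding it per request.
    record = db_handler.get_record(req.params.get("id"))
    return func.HttpResponse(str(record))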
Durable Functions
The alternative is to consider Durable Functions instead of standard ones. They are designed to pass objects (state) between functions, which makes them not quite stateless.
Durable Functions is an advanced extension for Azure Functions that isn't appropriate for all applications. This article assumes that you have a strong familiarity with concepts in Azure Functions and the challenges involved in serverless application development.
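For reference, a minimal Python orchestrator with the azure-functions-durable package looks roughly like this (the ProcessRecord activity name is invented for the sketch):

import azure.durable_functions as df


def orchestrator_function(context: df.DurableOrchestrationContext):
    # Inputs and intermediate results are persisted by the extension between
    # activity calls, rather than being held in instance memory.
    record_id = context.get_input()
    result = yield context.call_activity("ProcessRecord", record_id)
    return result


main = df.Orchestrator.create(orchestrator_function)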
In terms of the quotas and usage limits per instance, is there any considerable improvement or advantage when using Go in Google App Engine (GAE) instead of the other offered languages that run on GAE, like Python, Java, or PHP, or do all of them behave the same?
Or does any instance, no matter the language in use, behave the same way and handle roughly the same maximum requests/sec per instance, considering that this depends more on the GAE load balancer and infrastructure than on the programming language used? Could the same logic be applied to memory and CPU usage?
App Engine doesn't have explicit limits or restrictions that apply only when using a specific language. However, the languages and their technologies do imply certain limitations. For example, a Java Virtual Machine instance by itself requires significantly more memory and has a significantly higher startup time (even when warmup requests are enabled) than starting Go's built-in web server, so on a Java instance less memory remains for the webapp itself to allocate and use (for a given instance type).
I don't have concrete measures to compare, but (in case of Go):
"Code is deployed in source form and compiled in the cloud... Go is the first true compiled language that runs on App Engine. Go on App Engine makes it possible to deploy efficient, CPU-intensive web applications". (source)
If you think about it, the other languages on App Engine are all interpreted (including Java, whose bytecode is interpreted by a virtual machine), while Go is compiled into and runs as platform-dependent native code. That alone says something about performance.
For a case study, check out the following blog post:
From zero to Go: launching on the Google homepage in 24 hours
This post also contains a performance report of a real-world app used by millions:
This chart - taken directly from the App Engine dashboard - shows average request latency during launch. As you can see, even under load it never exceeds 60 ms, with a median latency of 32 milliseconds. This is wicked fast, considering that our request handler is doing image manipulation and encoding on the fly.
App Engine uses the web server that is included in the Go standard library to serve your app, so that also means you can easily port a Go web app to App Engine, and that you know exactly what to expect from the web server serving your app on App Engine.
Found official time comparisons of Python, Java and Go
The App Engine System Status page can be considered official and a good basis for comparison.
You can click on any cells belonging to a specific day and language, and you get detailed historical statistics for Static and Dynamic GET latency (both secure and unsecure), Error rates, CPU usage/latency. These statistics are measured on an instance that is already up and ready to serve.
Analysing it for the day of January 27, 2015 here are the conclusions for Go, Java and Python:
Dynamic latency is roughly the same for all
CPU latency (computing the 33rd Fibonacci number) is best for Java, followed by Go, with Python the slowest.
Static file serving time is roughly the same for all three, with Go the fastest.
I am working on a system where a bunch of modules connect to a MS SqlServer DB to read/write data. Each of these modules are written in different languages (C#, Java, C++) as each language serves the purpose of the module best.
My question, however, is about the DB connectivity. As of now, all these modules use the language-specific SQL connectivity API to connect to the DB. Is this a good way of doing it?
Or alternatively, is it better to have a Python (or some other scripting lang) script take over the responsibility of connecting to the DB? The modules would then send in input parameters and the name of a stored procedure to the Python Script and the script would run it on the database and send the output back to the respective module.
Are there any advantages of the second method over the first ?
Thanks for helping out!
If we assume that each language you use will have an optimized set of classes to interact with databases, then there shouldn't be a real need to pass all database calls through a centralized module.
Using a middleware for database manipulation does offer one very significant advantage: you can control, monitor and manipulate your database calls from a central, single location. So, for example, if one day you wake up and decide that you want to log certain elements of the database calls, you'll need to apply the logic/code change only in a single piece of code (the middleware). You can also implement different caching techniques in the middleware, so if the different systems share certain pieces of data, you'd be able to keep that data in the middleware and serve it as needed to the different modules.
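As an illustration, such a middleware could be a thin Python layer that centralises logging and optional caching around stored-procedure calls; the pyodbc usage and names below are a hypothetical sketch, not a prescription:

import logging

import pyodbc  # assumes an ODBC driver for SQL Server is installed

_cache = {}


def call_procedure(conn_str, proc_name, params, cacheable=False):
    """Run a stored procedure on behalf of a calling module, with central
    logging and optional result caching."""
    key = (proc_name, params)
    if cacheable and key in _cache:
        return _cache[key]

    logging.info("Executing %s with params %s", proc_name, params)
    conn = pyodbc.connect(conn_str)
    try:
        cursor = conn.cursor()
        placeholders = ", ".join("?" for _ in params)
        # ODBC escape sequence for calling a stored procedure
        cursor.execute("{CALL %s (%s)}" % (proc_name, placeholders), params)
        rows = cursor.fetchall()
    finally:
        conn.close()

    if cacheable:
        _cache[key] = rows
    return rows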
The above is a very advanced edge-case and it's not commonly used in small applications, so please evaluate the need for the above in your specific application and decide if that's the best approach.
Doing things the way you do them now is fine (if we follow the above assumption) :)