What is the best way to see where the hiccups are in function speeds?
I know cloud functions deal with cold starts; however, as far as debugging goes, what is the best way to see where the lag is?
Currently coding in python so any tips in that language would be greatly appreciated.
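Before optimizing anything, it helps to measure where the time actually goes. A minimal sketch of a timing decorator using only the standard library (the `slow_sum` function is a hypothetical example, not from the question):

```python
import functools
import time

def timed(fn):
    """Print how long each call takes, to find the slow spots."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{fn.__name__} took {elapsed * 1000:.1f} ms")
        return result
    return wrapper

@timed
def slow_sum(n):
    return sum(range(n))
```

For deeper analysis, the built-in `cProfile` module gives a per-function breakdown instead of a single total.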
Please take a look at the tips & tricks that describe best practices for designing, implementing, testing, and deploying Cloud Functions. To summarize, you should consider the following:
Write idempotent functions
Ensure HTTP functions send an HTTP response
Do not start background activities
Always delete temporary files
The performance section describes best practices for optimizing performance. In your case you should:
Use dependencies wisely
Use global variables to reuse objects in future invocations
Do lazy initialization of global variables
Also, here's an additional resource to understand cold boot time.
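The lazy-initialization tip above can be sketched as follows. The expensive object is created on the first call (i.e. at cold start) and reused across subsequent invocations; the dict stands in for a real client such as a database connection, and `handler` is a hypothetical function name:

```python
# Module-level slot for the expensive object; reused across warm invocations.
_client = None

def get_client():
    """Initialize the expensive object lazily, once per instance."""
    global _client
    if _client is None:
        _client = {"connected": True}  # stand-in for an expensive setup call
    return _client

def handler(request):
    client = get_client()  # cheap after the first (cold-start) call
    return "ok" if client["connected"] else "error"
```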
I have a piece of software deployed to Kubernetes, and in that I have a scenario when I want to have one function called at a later point in time, but I am not sure my current pod will be the one executing it (it may be scaled down, for example).
So I need help with a mechanism that will enable me to schedule a function for later, on a pod of my software that may or may not be the one that scheduled it, and also a way to decide not to do the execution if some condition was met ahead of time.
Also - I need this to support thousands of such calls at any given point in time; this is very fast, high-throughput software using Twisted Python, working on millions of tasks a day. But given the scaling up and down, I cannot just put it on the reactor for later.
Almost any use of a known module, external redis/db is fine.
So - I need this community's help...
Thanks in advance!
You are, roughly speaking, describing a worker queue system, with Celery being the most common one in Python. With RabbitMQ as the broker it can easily scale to whatever traffic you throw at it. Dask is another option, though I don't recommend it; I mention it only for completeness.
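The mechanism the question asks for (schedule a task for later, let any pod pick it up, and allow cancellation if a condition is met early) can be sketched like this. In production the queue state would live in the broker (Redis/RabbitMQ via Celery's `apply_async(eta=...)` and revocation), so this in-process heap is only a stand-in to illustrate the pattern; all names are hypothetical:

```python
import heapq
import time

class DelayedQueue:
    """Minimal stand-in for a broker-backed delayed-task queue.

    In a real deployment this state lives in Redis/RabbitMQ so that any
    pod can pick tasks up, even if the scheduling pod has been scaled down.
    """
    def __init__(self):
        self._heap = []          # (due_time, task_id, payload)
        self._cancelled = set()  # task ids revoked before execution

    def schedule(self, task_id, payload, delay_s):
        heapq.heappush(self._heap, (time.time() + delay_s, task_id, payload))

    def cancel(self, task_id):
        """Mark a task so workers skip it if the condition was met early."""
        self._cancelled.add(task_id)

    def pop_due(self, now=None):
        """Return due, non-cancelled tasks; any worker/pod may call this."""
        now = time.time() if now is None else now
        due = []
        while self._heap and self._heap[0][0] <= now:
            _, task_id, payload = heapq.heappop(self._heap)
            if task_id not in self._cancelled:
                due.append((task_id, payload))
        return due

q = DelayedQueue()
q.schedule("a", {"user": 1}, delay_s=0)
q.schedule("b", {"user": 2}, delay_s=0)
q.cancel("b")  # condition met ahead of time: "b" is never executed
```

With Celery, `schedule` maps to `apply_async(eta=...)` and `cancel` to revoking the task id (or having the task re-check the condition as its first step).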
I don't get the benefits of using TA-Lib's abstract API. Is it speed? Less processing? I went through the documentation, examples, and code samples, but I still can't see it. Can anyone explain?
EDIT: This question is not opinion-based; as you can see in the answer, it's about the function API vs. the abstract API of TA-Lib.
This API allows you to list all TA functions implemented in the DLL (Python's ta-lib module is just a wrapper around the compiled C library) and call them by indicator name (a string), instead of hardcoding a switch with 200+ cases and rebuilding your app every time TA-Lib adds a new indicator. It also allows you to query what input data an indicator requires, which can be used to adjust the UI and prompt the end user for the required data. So this API is designed not for a Python data scientist playing with a single TA indicator, but for desktop and web programmers writing GUI applications or web services aimed at a wide audience. There are no performance benefits.
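The dispatch-by-name design described above can be illustrated generically, without TA-Lib itself. The indicator implementations below are simplified stand-ins, not TA-Lib's actual code; the point is that a registry keyed by string replaces a hardcoded switch:

```python
def sma(values, period):
    """Simple moving average (illustrative, not TA-Lib's implementation)."""
    return [sum(values[i - period + 1:i + 1]) / period
            for i in range(period - 1, len(values))]

def roc(values, period):
    """Rate of change, in percent (illustrative)."""
    return [(values[i] - values[i - period]) / values[i - period] * 100
            for i in range(period, len(values))]

# The registry replaces a 200-case switch: adding an indicator is one entry,
# and a GUI can present INDICATORS.keys() to the user.
INDICATORS = {"SMA": sma, "ROC": roc}

def call_indicator(name, values, period):
    return INDICATORS[name](values, period)
```

In TA-Lib's abstract API the same idea looks like `abstract.Function("SMA")`, and the function object also exposes metadata about its required inputs.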
New programmer here!
I'm creating my first script on my own, and I have a particular function that is quite large, as in 50 lines.
I understand that theoretically a function can be as large as you need it to be, but etiquette-wise, what is a good limit to stay under?
I'm using Python 2.something if that makes a difference.
A good rule of thumb (and it's more a guideline than a hard rule) is that you should be able to view the entire function on one screen.
That makes it a lot easier to see the control flow without having to scroll all over the place in whatever editor you're using.
If you can't understand fully what a function does at first glance, it's probably a good idea to refactor chunks of code so that the more detailed steps are placed in their own, well-named, separate function and just called from this one.
However, it's not a hard-and-fast rule, you'll adapt your approach depending on your level of expertise and how complex the code actually is.
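The refactoring approach described above, where detailed steps move into their own well-named helpers, might look like this. The functions and the report format are hypothetical, invented purely for illustration:

```python
# Instead of one long function mixing parsing, validation, and formatting,
# each detailed step lives in its own well-named helper.

def parse_record(line):
    name, score = line.split(",")
    return name.strip(), int(score)

def is_valid(name, score):
    return bool(name) and 0 <= score <= 100

def format_report(records):
    return [f"{name}: {score}" for name, score in records]

def build_report(lines):
    """Reads like a summary; the details are one jump away in the helpers."""
    records = [parse_record(line) for line in lines]
    valid = [(n, s) for n, s in records if is_valid(n, s)]
    return format_report(valid)
```

The top-level function now fits in a few lines and can be understood at first glance, which is the real goal behind any line-count guideline.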
What are the best practices in Django to detect and prevent DoS attacks? Are there any ready-to-use apps or middleware that block abusive access and scanning by bots?
You might want to read the following 3 questions over on Security Stack Exchange.
A quick description of the problem:
How does DoS/DDoS attack work?
Possible solutions and limitations of attempting mitigation in software:
How can a software application defend against DoS/DDoS?
And a bit of discussion around commonly used anti-DDoS techniques at the perimeter, rather than the application:
What techniques do advanced firewalls use to protect against DoS/DDoS?
It is really difficult to do at the application level - the earlier in the path you can drop the attack, the better.
I'd probably aim to deal with DoS at a higher level in the stack. If you're using Apache, take a look at mod_security. Or maybe a nice set of firewall rules.
Edit: Depending on your situation, you also might want to take a look at a caching server like Varnish. It's a lot harder to DoS you, if the vast majority of hits are served by the lightning quick Varnish before they even reach your regular web server.
A simple mitigation is to limit your API with throttling and authentication.
The default throttling policy may be set globally, using the DEFAULT_THROTTLE_CLASSES and DEFAULT_THROTTLE_RATES settings.
The quote is from
https://www.django-rest-framework.org/api-guide/throttling/#setting-the-throttling-policy
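A minimal `settings.py` fragment applying that policy; the rates are illustrative and should be tuned for your actual traffic:

```python
# Django settings.py fragment (rates are illustrative, not recommendations).
REST_FRAMEWORK = {
    "DEFAULT_THROTTLE_CLASSES": [
        "rest_framework.throttling.AnonRateThrottle",
        "rest_framework.throttling.UserRateThrottle",
    ],
    "DEFAULT_THROTTLE_RATES": {
        "anon": "100/day",    # unauthenticated clients
        "user": "1000/day",   # authenticated users
    },
}
```

Note that throttling protects against abusive clients, not against a true distributed DoS, which, as the other answers say, is better dropped earlier in the stack.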
What HTTP framework should I use for a simple application with implied scalability, preferably Pythonic? I would like to be able to smoothly add new features to my app after it has been deployed.
I'm a big fan of Pylons. It behaves just like a framework should: it isn't heavy on magic, and it contains many good components you can pick and choose from to hit the ground running. It's small and easy to deploy, and requires minimal boilerplate or other syntactic cruft. Scalability seems pretty good -- I've not run into any issues, and major parts of Reddit utilize libraries from Pylons.
Web.py
It might look too simple, but it's a joy to use.
It can be deployed on google appengine. Should scale pretty well. Can be used with any WSGI server.
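Since any of these options ultimately speaks WSGI, here is a minimal WSGI application, using only the standard library, of the kind any such server can host (the handler and response are hypothetical):

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    """Minimal WSGI application; any WSGI server can host it."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello\n"]

# Exercise the app without a network, using wsgiref's test helpers.
environ = {}
setup_testing_defaults(environ)
status_headers = []
body = app(environ, lambda status, headers: status_headers.append((status, headers)))
```

Keeping to the WSGI interface is also what makes it easy to swap servers or frameworks later without rewriting the application.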
This is probably one of the most scalable solutions: G-WAN + Python:
http://forum.gwan.com/index.php?p=/discussion/comment/4126/#Comment_4126
Their published scalability tests (and the results) are impressive.