I was working on a project that renders images on the server and then sends them to the client. The server is currently an AWS Lambda function that runs a Python script. The script imports the bpy module, and that import is taking too much time. I want to reduce it somehow so that my overall function cost and runtime go down. Any sort of help will be appreciated. Thanks
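One pattern that usually helps on Lambda (a general sketch, not bpy-specific advice from this thread): keep the heavy import at module scope rather than inside the handler, so warm containers reuse the already-imported module and only cold starts pay the full import cost. The handler name below is just the usual Lambda convention.

```python
# Hedged sketch: import the heavy module once, at module scope, so a warm
# Lambda container reuses it across invocations; only cold starts pay for it.
import bpy  # slow import runs when the container initialises, not per request

def handler(event, context):
    # rendering logic would go here; on warm invocations bpy is already loaded
    return {"statusCode": 200}
```

Provisioned concurrency, or otherwise keeping the function warm, also reduces how often that cold-start import actually happens.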
I'm following this tutorial: https://www.youtube.com/watch?v=EzQArFt_On4
In this tutorial only one Python script is used, but what if I need to import some functions from another Python script? For example: import script2
I wonder what the correct way is to set this up in a Glue job? I've tried storing the script in an S3 bucket and adding its location under Edit job -> Security configuration, script libraries, and job parameters (optional) -> Python library path, but it gave me the error ModuleNotFoundError: No module named 'script2'. Does anyone know how to fix this? Thanks.
In the video tutorial there is no import like import script2, so if you put that in your script and don't provide a script2.py library, the import is going to fail with the message you are getting.
How to write modules is best explained in the Python docs.
The best way to start programming Glue jobs is to auto-generate Glue scripts from the Glue console. Then you can use the generated scripts as a starting point for customization. What's more, you can set up Glue development endpoints or even run Glue locally (or on an EC2 instance) for learning and development purposes.
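If you do want a second module, here is a minimal sketch of how the pieces fit together (the bucket path and function names are only illustrative): upload script2.py to S3 and point the Python library path (equivalently the --extra-py-files job parameter) at that S3 URI; the import then works like any other.

```python
# script2.py -- helper module uploaded to, e.g., s3://your-bucket/libs/script2.py
# (bucket and function names are illustrative)
def clean_value(value):
    return value.strip().lower()
```

```python
# Main Glue job script: with the S3 path of script2.py listed under
# "Python library path" (or passed via --extra-py-files), this import resolves.
import script2

print(script2.clean_value("  Example  "))
```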
I've got the following setup:
A VB.NET web service is running and it needs to regularly call a Python script with a machine learning model to predict some stuff. To do this, my web service generates a file with input for Python and runs the Python script as a subprocess. The script makes predictions and returns them, as standard output, back to the web service.
The problem is that the script needs a few seconds to import all the machine learning libraries and load the saved model from disk, which is much longer than the actual prediction takes. During this time the web service is blocked by the running subprocess. I have to reduce this time drastically.
What I need is a solution that either:
1. Improves library and model loading time.
2. Lets the Python script communicate with the VB.NET web service while Python runs all the time with the imports and ML model already loaded.
Not sure I understood the question, but here are some things I can think of.
If this is a network thing, you could have Python compress the code before sending it over the web.
Or, if you're able to, use multithreading while reading the file from the web.
Maybe you should post some more code so we can help you better.
I've found what I needed.
I used web.py to convert the Python script into a web service, and now the VB.NET and Python web services can communicate. Python is running all the time, so there is no delay for loading libraries and data each time a calculation has to be done.
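For anyone landing here later, a minimal sketch of that setup, with the assumptions spelled out: the route, payload format, and model path are made up, and the model is assumed to have been saved with joblib. The point is simply that the expensive loading happens once, when the process starts.

```python
# Hedged sketch of a web.py prediction service: libraries and the saved model
# are loaded once at startup, then each request only runs the prediction.
import json

import joblib  # assumption: the model was saved with joblib/pickle
import web

urls = ('/predict', 'Predict')
model = joblib.load('model.pkl')  # illustrative path; loaded once per process

class Predict:
    def POST(self):
        payload = json.loads(web.data())          # e.g. {"values": [1.0, 2.0]}
        prediction = model.predict([payload['values']])
        return json.dumps({'prediction': prediction.tolist()})

if __name__ == '__main__':
    app = web.application(urls, globals())
    app.run()  # listens on port 8080 by default
```

The VB.NET side then just POSTs JSON to the /predict endpoint instead of spawning a subprocess.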
I have written a Python program with a CSV file as the backend. The algorithm involved performs long and deep numerical computations, so I am not able to run it on my local computer. Last time I ran it, it took more than 2 hours and still didn't finish. Can anyone tell me whether there is a way to execute the program, with the CSV file as backend, on some server?
Or is there any other way to do it?
Also, I don't want to use Spark for this.
I'm working on a Python project, currently using Django, which does quite a bit of NLP work when post-processing a form. I'm using the NLTK package, and from profiling my code and experimenting I've realised that the majority of the time is spent importing NLTK and various other packages. My question is: is there a way I can have this server start up, do these imports, and then just wait for requests, passing them to a function that uses the already-imported packages? This would be much faster and less wasteful than performing such imports on every request. If anybody has any ideas for avoiding the import of large packages on every request, it'd be great if you could help me out!
Thanks,
Callum
Django, under most deployment mechanisms, does not import modules for every request. Even the development server only reloads code when it changes. I don't know how you're verifying that all the imports are re-run each time, but that certainly shouldn't be happening.
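To make that concrete, a minimal sketch (the view name, field name, and the assumption that the punkt tokenizer data is already downloaded are all illustrative): keep the heavy imports at module level in views.py, and each worker process pays the import cost exactly once.

```python
# views.py -- hedged sketch: NLTK is imported at module level, so a worker
# process loads it once at startup rather than on every request.
from django.http import JsonResponse
from nltk.tokenize import word_tokenize  # slow import happens here, once

def analyze(request):
    # by the time a request arrives, word_tokenize is already in memory
    tokens = word_tokenize(request.POST.get('text', ''))
    return JsonResponse({'tokens': tokens})
```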
If you have a Python application in production (i.e. not already running under a debugger or profiler), is there any way to attach to the Python process/instance and examine it? It would be useful to know:
what code is consuming time
what is using memory
Any insight would be appreciated.
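This one is left open in the thread, but one standard-library-only sketch (an assumption on my part, and it only works if you can add the instrumentation before the process starts) is to register signal handlers up front, so a live process can be asked what it is doing and where memory went. External samplers such as py-spy can attach to an unmodified process, which is closer to the "already in production" case.

```python
# Hedged sketch (Unix-only): pre-instrument the app so a running process can
# be inspected later by sending it signals.
import faulthandler
import signal
import tracemalloc

tracemalloc.start()  # start tracking allocations from process start

# SIGUSR1 -> dump the current traceback of every thread to stderr
faulthandler.register(signal.SIGUSR1, all_threads=True)

def dump_memory(signum, frame):
    # SIGUSR2 -> print the ten biggest allocation sites seen so far
    snapshot = tracemalloc.take_snapshot()
    for stat in snapshot.statistics('lineno')[:10]:
        print(stat)

signal.signal(signal.SIGUSR2, dump_memory)
```

With that in place, kill -USR1 <pid> shows what code is running and kill -USR2 <pid> shows what is using memory, without stopping the process.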