How do I check why a Kubernetes webpage keeps loading? - python

I followed this Google Cloud Kubernetes tutorial for Python. I basically changed what's in their hello-world function to plot with matplotlib (with some other functions beforehand to get data to plot). It all worked (with some changes to the Dockerfile to pip install modules, and to use plain Python 3.7 instead of the slim version) until the step where it says to view a deployed app. I copy the external IP and try it in the browser, but it just loads. I'm not sure what to check to see why it won't finish loading.
So I'm wondering how to check where the problem is. The Python code works fine elsewhere, outputting a plot with Flask on a local computer.

You can try proxying from your localhost directly to the pod to see if there's a problem with the load balancer.
kubectl port-forward your-pod-xxxxxxxxxx-xxxxx <local-port>:<pod-port>
Then you can just hit http://127.0.0.1:<local-port> in your browser.
You can also take a look at the pod logs:
kubectl logs your-pod-xxxxxxxxxx-xxxxx
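If you want to script that check, here is a minimal sketch using only the standard library; the port 8080 below is a placeholder for whatever local port you chose in the port-forward:

```python
import urllib.request
import urllib.error

def probe(url, timeout=5):
    """Return the HTTP status code, or a short error description."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code          # the server answered, but with an error status
    except Exception as e:     # connection refused, timeout, etc.
        return repr(e)

# e.g. after `kubectl port-forward your-pod-xxxxxxxxxx-xxxxx 8080:5000`:
print(probe("http://127.0.0.1:8080/"))
```

If the probe returns a status code, the pod is serving and the problem is likely in the load balancer or Service; if it hangs or errors, the problem is in the pod itself.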

Related

Running Docker containers on Azure using docker-py

I can find all of the ingredients for what I want to do, but I'm not sure if I can put them together.
Ultimately, I want a Python process to be able to create and manage Docker instances running on Azure.
This link shows that you can use the Docker API to fire up instances on Azure: https://docs.docker.com/engine/context/aci-integration/. It's Beta, but I've been able to run my own container on Azure after logging in, using something like this:
docker --context myacicontext run hello-world
The second half of my problem is to call this from docker-py. The vanilla usage of docker-py is nice and straightforward, but I can't find any reference to the --context flag in the docker-py docs (https://docker-py.readthedocs.io/en/stable/).
Is there a way for configuring docker-py such that it provides a --context?
EDIT:
Thanks to @CharlesXu pointing me in the right direction, I have now found that the following docker-py command does have an effect:
docker.context.ContextAPI.set_current_context("myacicontext")
This changes the default context used by the docker cmd line interface, so
C:\Users\MikeSadler>docker ps -a
will subsequently list the containers running in Azure, and not locally.
However, the docker.from_env().containers.list(all=all) command stubbornly continues to return the local containers. This is true even if you restart the Python session and create a new client from a completely fresh start.
CONCLUSION:
Having spoken with the Docker developers, as of October 2020 docker-py officially does not support Cloud connections. They are currently using GRPC protocols to manage Cloud runs, and this may be incorporated into docker-py in the future.
I'm afraid there is no way to do what the command docker --context myacicontext run hello-world does. There is also no parameter like --context in the SDK. As far as I know, you can set the current context using the SDK like this:
import docker
docker.context.config.write_context_name_to_docker_config('context_name')
But when you use the code:
client = docker.from_env()
client.containers.run('xxx')
Then the context is reset to default, which means you cannot run the containers on ACI. I think this may be a bug that needs to be fixed. I'm not entirely sure, but that's how it stands right now.
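For what it's worth, the docker CLI persists its current context in ~/.docker/config.json (under a currentContext key), so you can at least verify what the SDK call changed without invoking the CLI. A minimal sketch, assuming that file layout:

```python
import json
import os

def current_docker_context(config_path=None):
    """Read the context name the docker CLI will use; 'default' if unset."""
    path = config_path or os.path.expanduser("~/.docker/config.json")
    try:
        with open(path) as f:
            config = json.load(f)
    except FileNotFoundError:
        return "default"
    return config.get("currentContext", "default")

print(current_docker_context())
```

Note this only tells you which context the CLI will pick up; as described above, docker.from_env() ignores it anyway.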

Google Cloud Vision not responding when using gunicorn+Flask

I am new to the Google Vision API, but I have been working with gunicorn and Flask for some time. I installed all the required libraries. I have my API key in the environment via the gunicorn bash file. Whenever I try to hit the GCP API, it just freezes with no response.
Can anybody help?
Here's my gunicorn_start.bash
NAME="test"
NUM_WORKERS=16
PYTHONUNBUFFERED=True
FLASK_DIR=/home/user/fold/API/
echo "Starting $NAME"
cd $FLASK_DIR
conda activate tf
export development=False
export GOOGLE_APPLICATION_CREDENTIALS='/home/user/test-6f4e7.json'
exec /home/user/anaconda3/envs/tf/bin/gunicorn --bind 0.0.0.0:9349 --timeout 500 --worker-class eventlet --workers $NUM_WORKERS app:app
EDIT
It freezes during API call.
Code for API call:
import io
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with io.open(path, 'rb') as image_file:
    content = image_file.read()
image = vision.types.Image(content=content)
response = client.document_text_detection(image=image)
There are no logs; it just freezes, nothing else.
The code looks fine and it doesn't seem to be a permission error. Since there are no logs, the issue is hard to troubleshoot; however, I have two theories of what could be happening. I'll leave them below, with some information on how to troubleshoot them.
The API call is not reaching Google's servers
This could be happening due to a networking error. To rule this out, try to make a request from the development environment (or wherever the application is running) using curl.
You can follow the CLI quickstart for Vision API to prepare the environment and make the request. If it works, then you can rule out the network as a possible cause. If the request fails or freezes, then you might need to check the network configuration of your environment.
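To see what such a direct request looks like, here is a sketch that builds the JSON body for the REST endpoint (https://vision.googleapis.com/v1/images:annotate, with API-key auth); you can POST it with curl or urllib to bypass the client library entirely:

```python
import base64
import json

def build_annotate_request(image_bytes):
    """Build the JSON body for a POST to
    https://vision.googleapis.com/v1/images:annotate?key=YOUR_API_KEY"""
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "DOCUMENT_TEXT_DETECTION"}],
        }]
    }

body = json.dumps(build_annotate_request(b"...image bytes..."))
# POST `body` to the endpoint above; if even this hangs, the network is the problem.
```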
In addition, you can go to the API Dashboard in the Cloud Console and look at the metrics of the Vision API. In these graphs, you can see if your requests are reaching the server, as well as some useful information like: errors by API method, errors by credential, latency of requests, etc.
There's an issue with the image/document you're sending
Note: Change the logging level of the application to DEBUG (if it's not already at this level).
If you're certain that the requests are reaching the server, the possible issue could be with the file that you're trying to send. If the file is too big, the connection might look as if it was frozen while it is being uploaded, and also it might take some time to be processed. Try with smaller files to see the results.
Also, I noticed that you're currently using a synchronous method to perform the recognition. If the file is too big, you could try the asynchronous annotation approach. Basically, you upload your file(s) to Cloud Storage first and then create a request indicating: the storage URI where your file is located and the destination storage URI where you want the results to be written to.
What you'll receive from the service is an operation Id. With this Id, you can check the status of the recognition request and make your code wait until the process has finished. You can use this example as a reference to implement it.
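As a rough sketch of what such an asynchronous request looks like at the REST level (files:asyncBatchAnnotate); the bucket names and batch size below are placeholders, not values from the question:

```python
def build_async_request(source_uri, destination_uri):
    """Body for POST https://vision.googleapis.com/v1/files:asyncBatchAnnotate.
    The service replies with an operation name you poll until it is done."""
    return {
        "requests": [{
            "inputConfig": {
                "gcsSource": {"uri": source_uri},
                "mimeType": "application/pdf",
            },
            "features": [{"type": "DOCUMENT_TEXT_DETECTION"}],
            "outputConfig": {
                "gcsDestination": {"uri": destination_uri},
                "batchSize": 20,  # results per output JSON file
            },
        }]
    }

req = build_async_request("gs://my-bucket/scan.pdf", "gs://my-bucket/results/")
```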
Hopefully with this information you can determine what is the issue you're facing and solve it.
I had the exact same issue, and it turned out to be gunicorn's fault. When I switched to the Django dev server, the problem went away. I tried with older gunicorn versions (back to 2018), and the problem persisted. I should probably report this to gunicorn. :D
Going to switch to uwsgi in the meantime.

Debug python (in Docker) from VS Code

I have a python application with various dependencies that get resolved during docker-compose build command. A docker image is created, and when run it's a simple REST API that I can access via a browser.
I want to send a GET request and then debug the corresponding method in VS Code. However I'm struggling to get this to work. I'm able to get the docker image running from within VS Code (using Remote-Containers: Open Folder in Container option). I can see the API is up and changes in the code are reflected live.
However I'm struggling to get the debugging part to work.
When I start debugging, I'm asked to provide a Debug Configuration, and I'm not sure which one is the right one to pick or how to set one up.
Please see the VS Code documentation on Python debugging for how to create a debug configuration.
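One common setup (a sketch, not the only option) is to run the app under debugpy inside the container and attach to it from VS Code. The port and the remoteRoot path below are assumptions you would adapt to your own container layout:

```json
{
    "name": "Attach to container",
    "type": "python",
    "request": "attach",
    "connect": { "host": "localhost", "port": 5678 },
    "pathMappings": [
        { "localRoot": "${workspaceFolder}", "remoteRoot": "/app" }
    ]
}
```

Inside the container you would then start the app with something like python -m debugpy --listen 0.0.0.0:5678 --wait-for-client app.py (after pip installing debugpy and publishing port 5678 in docker-compose), set a breakpoint in the handler, start this configuration, and send your GET request.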

Azure failing to run python script

I built a simple web app using Flask. What it does is basically take data from a form and send a POST request, whose content is then passed as a command-line argument to the script using
os.popen("python3 script.py " + postArgument).read()
The command's output is stored in a variable, which is then passed to an element in a new page with the results.
About the script: It runs the string in the POST through an API, gets some data, processes it, sends it to another API and finally prints the results (which are finally stored in the variable)
It works fine on a local server. But Azure fails to return anything. The string is empty.
How do I get some terminal logs?
Is there a solution?
In my experience, the issue is caused by the fact that the Python 3 (and likewise Python 2) interpreter on Azure is called python, not python3.
So if you have configured the Python 3 runtime environment in the Application settings on the Azure portal, please use python script.py instead of python3 script.py in your code.
Or you can also use the absolute path of Python 3 on Azure WebApp, D:\Python34\python, instead of python3 in your code.
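A portable way to sidestep the interpreter-name problem entirely is to launch the script with the same interpreter that is running Flask, via sys.executable. A sketch, using subprocess instead of os.popen (the function name is mine, not from the question):

```python
import subprocess
import sys

def run_script(script, arg):
    """Run `script` with the interpreter running this process; return stdout."""
    result = subprocess.run(
        [sys.executable, script, arg],  # list form: no shell, no injection risk
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

This also avoids the shell-injection risk of concatenating postArgument into an os.popen command string.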
However, I also doubt the another possible issue for you besides the above case. You may use some python packages which be not install using pip on Azure. If so, you need to refer to the section Troubleshooting - Package Installation of the Azure offical document for Python to resolve the possible issue.
Hope it helps. If you have any concerns or updates, please feel free to let me know.

How do I setup a bokeh application such that it can be accessed through the internet?

Note from maintainers: This question as originally posed is in regards to the first generation Bokeh server which no longer exists. For information about running modern Bokeh server applications, see Running A Bokeh Server in the docs.
I want to set up an interactive bokeh app, which can be accessed by anyone over the internet.
For understanding, how this works, I am currently trying to get the stocks example running, such that I can access it, for example, from my mobile phone.
I have already tried the following:
opened ports 5006 and 5050 and tried to access the app via http://<my_global_ip>:<port>
studied the HTML source of http://docs.bokeh.org/en/latest/docs/server_gallery/stocks_server.html and figured out how that source differs from the generated source code
So far I got the whole example running on the computer, where the bokeh server is running, such that I can access it via localhost:5006/bokeh/stocks/ and localhost:5050/. But as soon as I try to access it from another machine, I see the html content, but not the plot.
Edit:
I'm trying to run the example at https://github.com/bokeh/bokeh/tree/master/examples/deploy because it sounds promising, but since I do not really understand what I'm doing here, I would appreciate clarification. In any case, I can't get the example working: installing gunicorn with conda only worked after some headaches, and the provided commands finally run, but I do not get any response on port 5006 or port 7001. Perhaps I'm just misunderstanding the example?
Modern Bokeh versions:
You need to specify what websocket origins are permitted to connect:
https://docs.bokeh.org/en/latest/docs/user_guide/server.html#websocket-origin
E.g.
bokeh serve --show --allow-websocket-origin=foo.com sliders.py
For Bokeh version 0.11
Due to changes in the Bokeh server, you now need to call
bokeh serve sliders.py --host <globalip>:5006
Nothing else is needed.
Please note that you have to change the code for your app as well!
See https://github.com/bokeh/bokeh/blob/master/examples/app/sliders.py for the updated sliders app.
