I am trying to run the Flask mega-tutorial app on Azure using Docker. The Dockerfile is as given here. First I tried EXPOSE 5000 (as mentioned in this Dockerfile), but as that led to ERR_CONNECTION_TIMED_OUT, I then tried EXPOSE 80 as suggested here, but the error remained.
Both ports 5000 and 80 in the Dockerfile worked fine on a local server. Also, in each case, Azure reports instanceView.state=="Running", but pinging the IP address does not return anything.
The Azure-Docker helloWorld image also runs fine, and my Azure CLI commands are exactly the same as in this example, except for changing the container registry name etc. Apart from the CLI, I tried doing it on the Azure portal as well, with the same outcome.
Thanks
If there is no issue with your image and it works fine locally, then this is most likely a port issue with Azure Container Instances.
Azure Container Instances does not currently support port mapping like
with regular docker configuration
This means that if you expose port 5000 in the container, you should expose the same port in the Azure Container Instance group. For more details, see IPs may not be accessible due to mismatched ports. Also, it may be better to use port 80. Hope this helps. If you have more questions, you can leave me a message.
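As a sketch (the resource group, registry, and DNS label below are placeholders, not your actual values), creating the instance with the same port the container exposes would look something like:
az container create --resource-group myResourceGroup --name flask-app --image myregistry.azurecr.io/flask-app:latest --registry-username <username> --registry-password <password> --dns-name-label flask-app-demo --ports 5000
The value passed to --ports needs to match the port the Flask app actually listens on inside the container.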
I tested with the application given in your GitHub repo and it worked.
Hello, I coded this website that generates math problems (here is the code: Here).
It is coded in Flask and it is being hosted locally on this link that is not accessible to other people: http://127.0.0.1:5000/. I have a Google domain and I want to have a website. What things/services do I need to use? I have been waiting to see if I need to use AWS, but I think I might need to. I have tried things like transferring it off of Flask but I can't. If this is a repost, sorry; please post the answer there. Thanks -Ben
I am assuming what you're asking is how to host your Flask web site so others can view it. The address you mention in your post is the localhost address for your computer and is only accessible from your own computer. If you only want someone on your same network (WiFi) to access it, you would need to replace "127.0.0.1" with the IP address of your computer. You would also likely have to open your computer's firewall to allow port 5000.
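For example, a minimal sketch of the Flask side (the route and names are just illustrative):
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello from the LAN!'

if __name__ == '__main__':
    # 0.0.0.0 binds to all network interfaces, so other machines on your
    # network can reach the app at http://<your-computer-ip>:5000/
    app.run(host='0.0.0.0', port=5000)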
However, if you want anyone on the internet to access your site, there are a ton of ways to do this but since you mentioned AWS, you can do this easily by running a small EC2 instance (virtual server). If you have a new AWS account and have not already run any EC2 in that account, you can actually run a small EC2 instance for free for a whole year. Great for small projects. If you're just getting started with EC2, you may want to go here https://aws.amazon.com/ec2/getting-started/
Basic steps:
Spin up an EC2 instance. Choose the default Amazon Linux 2 OS type, make sure to create/assign a key pair so you can later ssh into it, and make sure the Allow SSH from anywhere setting is checked/selected and the Allow HTTP checkbox is checked (not HTTPS).
Wait for the instance to launch.
Log into your instance by clicking on your EC2 instance in the list of EC2 instances and clicking the Connect button, then click the Connect button again (Instance connect tab). If that doesn't work, follow the steps on the SSH client tab.
Install flask
pip3 install flask
Clone your git repo
git clone https://github.com/some0ne14/Math-Ibex.git
Change to your repo's folder
cd Math-Ibex/Math-Practice-Website-master
Edit your main.py so that the app.run line looks like the following (you can actually do this on GitHub before you run git clone, or use the nano command to edit the file easily). This allows the app to run on the standard web port 80.
app.run(host='0.0.0.0', port=80, debug=True)
Run the following to start the application. On Linux, port 80 is a privileged port, so you may need to run it with sudo. If you want to run it as a service so you can walk away or close the terminal and it will still stay running, see the systemd sketch after the command below.
python3 main.py
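A minimal systemd unit sketch for that (the paths assume the steps above and the default ec2-user home directory; adjust to your setup). Save it as /etc/systemd/system/mathapp.service:
[Unit]
Description=Math practice Flask app
After=network.target

[Service]
# root is needed only because the app binds to privileged port 80
User=root
WorkingDirectory=/home/ec2-user/Math-Ibex/Math-Practice-Website-master
ExecStart=/usr/bin/python3 main.py
Restart=always

[Install]
WantedBy=multi-user.target
Then enable and start it with "sudo systemctl enable --now mathapp.service".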
You can now connect to your server with any web browser using your EC2 instance's public IP address or the generated AWS DNS name (available on the EC2 instance properties page).
Make sure to stop your instance when not using it to save those free runtime minutes.
I am new to Linux/AWS in general and I am trying to deploy a Dash web app onto an EC2 instance. The web app is written in Python and uses an AWS database. I created an EC2 instance, set the security group to allow all traffic, and used the default VPC and internet gateway. I successfully installed all the app dependencies, but whenever I run the app.py file, the public DNS doesn't load the webpage. I have tried pinging the public IP and that works. I really have a limited knowledge base here and have tried different options but can't seem to get it working. Please help :)
Public IP: https://ec2-3-8-100-74.eu-west-2.compute.amazonaws.com/
security group
webapp
I've been smacking my head on this for a couple days and finally got it. I know it's been a while but hopefully this helps someone else. Had a hard time finding answers elsewhere. Very similar to you, I had the ec2 instance set up, the security groups and vpc set up (those steps aren't too difficult and are well-documented). I had some successful pings, but was getting a "connection refused" error through the browser.
The "app.run_server()" parameters were the missing piece for me:
if __name__ == '__main__':
    app.run_server(host='0.0.0.0', port=80)
At that point calling the .py app gave me a 'permission denied,' which I was able to get around by running as sudo ("sudo python3 my_app.py") -- and by sudo pip install-ing necessary packages. (All through ssh, fwiw).
After finally running successfully I was given an IP from the dash app corresponding to my private IPv4 on EC2, and at that point could set my browser to the PUBLIC IPv4 and get to the app. Huzzah.
Playing around with it a little, it looks like as long as you have:
host= '0.0.0.0'
you'll run it online. Without that, it runs only locally (you'll see IP as 127.0.0.1). Then it's a matter of making sure whatever port you're using (:80, :443, :8050) is open according to firewalls and security groups. Dash for me defaults to :8050, and that port might be fine as long as it's allowed through security groups.
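If you're using the AWS CLI, opening the Dash default port in a security group might look like this (the group ID is a placeholder):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8050 --cidr 0.0.0.0/0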
QUICK UPDATE:
I tried leaving it on port :8050, and also opened :8050 to all ipv4 in my security group. That let me run everything successfully without using "sudo python3".
if __name__ == '__main__':
    app.run_server(host='0.0.0.0', port=8050)
Run it with "python3 my_app.py" over ssh.
I am trying to build a simple Python-based docker container. I am working at a corporation behind a proxy, on Windows 10. Below is my docker file:
FROM python:3.7.9-alpine3.11
WORKDIR /app
# Copy the requirements file before the install step so pip can find it
COPY requirements.txt .
RUN pip install --proxy=http://XXXXXXX:8080 -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "application.py"]
But it's giving me the following error in cmd:
"failed to solve with frontend dockerfile.v0: failed to build LLB: failed to load cache key: failed to do request: Head https://registry-1.docker.io/v2/library/python/manifests/3.7.9-alpine3.11: proxyconnect tcp: EOF"
I've tried to figure out how to configure Docker's proxy using many links, but they keep referring to a file "/etc/sysconfig/docker" which I cannot find anywhere under Windows 10, or maybe I'm not looking in the right place.
Also I'm not sure this is only a proxy issue since I've seen people running into this issue without using a proxy.
I would highly appreciate anyone's help. Working at this corporation has already made me spend >10 hours on something that took 10 minutes on my Mac... :(
Thank you
You're talking about the most basic Docker functionality: normally, it has to connect to Docker Hub on the internet to get base images. If you can't make this work through your proxy, you can either
preload your local cache with the necessary images
set up a Docker registry inside your firewall that contains all the images you'll need
Obviously, the easiest thing, probably by far, would be to figure out how to get Docker to connect to Docker Hub through your proxy.
In terms of getting Docker on Windows to work with your proxy, this might help: https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-docker/configure-docker-daemon
Here's what it says about configuring a proxy:
To set proxy information for docker search and docker pull, create a Windows environment variable with the name HTTP_PROXY or HTTPS_PROXY, and a value of the proxy information. This can be completed with PowerShell using a command similar to this:
In PowerShell:
[Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://username:password@proxy:port/", [EnvironmentVariableTarget]::Machine)
Once the variable has been set, restart the Docker service.
In PowerShell:
Restart-Service docker
For more information, see Windows Configuration File on Docker.com.
I've also seen it mentioned that Docker for Windows allows you to set proxy parameters in its GUI configuration.
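Another option I believe works is the Docker client configuration file at ~/.docker/config.json; note that, as far as I know, this sets proxy environment variables for builds and containers, while the daemon's own image pulls still need the settings above:
{
  "proxies": {
    "default": {
      "httpProxy": "http://XXXXXXX:8080",
      "httpsProxy": "http://XXXXXXX:8080",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}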
There is no need to pass proxy information in the Dockerfile.
There are predefined ARGs which can be used for this purpose.
HTTP_PROXY
HTTPS_PROXY
FTP_PROXY
You can pass the details when building the image:
https://docs.docker.com/engine/reference/builder/#predefined-args
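For example, using the proxy placeholder from your Dockerfile, you could drop the --proxy flag from the pip line and build with:
docker build --build-arg HTTP_PROXY=http://XXXXXXX:8080 --build-arg HTTPS_PROXY=http://XXXXXXX:8080 -t my-flask-app .
The predefined proxy args are available as environment variables during RUN steps, and pip picks them up automatically.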
I do not see any runtime dependency of your container on the internet, so running the container should work without an issue.
I have a docker-compose file through which I'm running multiple services, with different containers for each service. Among these containers, I have one container which is responsible for getting the hardware and network info of the host machine. When I run that container in standalone mode, it is able to give me the host IP. But unfortunately, when I run it along with the other containers (more precisely, through the docker-compose file), I am not able to get the host network information; instead I always get the bridge network information (i.e., the docker-compose internal network information). I tried to set network_mode: host on my service, but when I set it, the container stops communicating with the other containers. Can anyone please suggest a way of getting the host network information without breaking the internal communication between the different service containers?
You could, perhaps, put this container in two networks: one with the host information, and the other for the 'internal' communication with the containers.
For example:
https://success.docker.com/article/multiple-docker-networks
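Without your compose file it's hard to be specific, but a minimal sketch of the two-network idea might look like this (service and network names are made up):
version: "3"
services:
  netinfo:
    build: ./netinfo
    # joins both networks: one facing outward, one for talking to the other services
    networks:
      - outward
      - internal
  worker:
    build: ./worker
    networks:
      - internal
networks:
  outward:
    driver: bridge
  internal:
    driver: bridge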
Without a Dockerfile and a docker-compose file it's hard to test, but I think that if you have a container with network_mode: host, you have to access the other containers through the host only, which means port forwarding. So if you have your container in host mode, and another bound to localhost:8080, I think you should be able to reach localhost:8080 from the host-mode container.
Give me feedback about this, please!
Have fun!
I am running a PyTorch training of CycleGAN inside a Docker image.
I want to use visdom to show the progress of the training (as also recommended by the CycleGAN project).
I can start a visdom.server inside the docker container and access it from outside the container. But when I try to run the basic visdom example in a bash session of the same container that is running the visdom.server, I get connection refused errors such as The requested URL could not be retrieved.
I think I need to configure visdom.Visdom() in the example in some custom way to be able to send the data to the server.
Thankful for any help!
Notes
When I start visdom.server it says You can navigate to http://c4b7a2be26c4:8097, while all the examples mention localhost:8097.
I am trying to do this behind a proxy.
I realised that, in order to curl localhost:8097, I need to use curl --noproxy localhost localhost:8097. So I will have to do something similar inside visdom.
When setting http_proxy inside a docker container, you need to set no_proxy=localhost,127.0.0.1 as well in order to allow connections to localhost.
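Putting those two notes together, a minimal sketch of the client side (assuming the server runs in the same container on the default port 8097):
import os

# make sure requests to the local visdom server bypass the corporate proxy
os.environ['no_proxy'] = 'localhost,127.0.0.1'

import visdom

# point the client explicitly at the local server
vis = visdom.Visdom(server='http://localhost', port=8097)
assert vis.check_connection(), 'could not reach visdom.server'
vis.text('hello from inside the container')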
Got the same problem, and I found that when you use a docker container to run the server, you cannot use the same docker container to run your code.