Problem
I have a 2-container docker-compose.yml file.
One of the containers is a small FastAPI app.
The other is just trying to hit the API using Python's requests package.
From the host, I can reach the API container with the exact same code that the client container runs, and it works; the same code fails when run inside the client container.
docker-compose.yml
version: "3.8"
services:
  read-api:
    build:
      context: ./read-api
    depends_on:
      - "toy-api"
    networks:
      - ds-net
  toy-api:
    build:
      context: ./api
    networks:
      - ds-net
    ports:
      - "80:80"
networks:
  ds-net:
Relevant requests code
from requests import Session

def post_to_api(session, raw_input, path):
    print(f"The script is sending: {raw_input}")
    print(f"The script is sending it to: {path}")
    response = session.post(path, json={"payload": raw_input})
    print(f"The script received: {response.text}")

def get_from_api(session, path):
    print(f"The script is trying to GET from: {path}")
    response = session.get(path)
    print(f"The script received: {response.text}")

session = Session()
session.trust_env = False  # I got that from here: https://stackoverflow.com/a/50326101/534238

get_from_api(session, path="http://localhost/test")
post_to_api(session, "this is a test", path="http://localhost/raw")
Running It REPL-Style
If I create an interactive session and run the exact commands from the requests code above, it works:
>>> get_from_api(session, path="http://localhost/test")
The script is trying to GET from: http://localhost/test
The script received: {"payload":"Yes, you reached here..."}
>>> post_to_api(session, "this is a test", path="http://localhost/raw")
The script is sending: this is a test
The script is sending it to: http://localhost/raw
The script received: {"payload":"received `raw_input`: this is a test"}
To be clear: the API code is still running as a container, and that container was still created with the docker-compose.yml file. (In other words, the API container works properly when accessed from the host.)
Running Within Container
Doing the same thing within the container, I get the following (fairly long) errors:
read-api_1 | The script is trying to GET from: http://localhost/test
read-api_1 | Traceback (most recent call last):
read-api_1 | File "/usr/local/lib/python3.8/site-packages/urllib3/connection.py", line 159, in _new_conn
read-api_1 | conn = connection.create_connection(
read-api_1 | File "/usr/local/lib/python3.8/site-packages/urllib3/util/connection.py", line 84, in create_connection
read-api_1 | raise err
read-api_1 | File "/usr/local/lib/python3.8/site-packages/urllib3/util/connection.py", line 74, in create_connection
read-api_1 | sock.connect(sa)
read-api_1 | ConnectionRefusedError: [Errno 111] Connection refused
read-api_1 |
read-api_1 | During handling of the above exception, another exception occurred:
.
.
.
read-api_1 | Traceback (most recent call last):
read-api_1 | File "access_api.py", line 99, in <module>
read-api_1 | get_from_api(session, path="http://localhost/test")
read-api_1 | File "access_datalake.py", line 86, in get_from_api
read-api_1 | response = session.get(path)
read-api_1 | File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 543, in get
read-api_1 | return self.request('GET', url, **kwargs)
read-api_1 | File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 530, in request
read-api_1 | resp = self.send(prep, **send_kwargs)
read-api_1 | File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 643, in send
read-api_1 | r = adapter.send(request, **kwargs)
read-api_1 | File "/usr/local/lib/python3.8/site-packages/requests/adapters.py", line 516, in send
read-api_1 | raise ConnectionError(e, request=request)
read-api_1 | requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /test (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ffa9c69b3a0>: Failed to establish a new connection: [Errno 111] Connection refused'))
ai_poc_work_read-api_1 exited with code 1
Attempts to Solve
I thought the problem was with how the host identifies itself within the container group, or whether that origin could be accessed, so I have already tried the following, with no success:
- Instead of using localhost as the host, I used read-api. (In fact, I started with read-api and had no luck; only after switching to localhost could I at least use the REPL on the host machine, as shown above.)
- I also tried 0.0.0.0, with no luck. (I did not expect that to fix it.)
- I changed which CORS origins are allowed in the API, including every possible address for the container that is trying to read, and even "*" to allow all origins. No luck.
What am I doing wrong? It seems the problem must be with the containers themselves, or with how requests interacts with them, but I cannot figure out what.
Here are some relevant GitHub issues or SO answers I found, but none solved it:
GitHub issue: Docker Compose problems with requests
GitHub issue: Solving high latency requests in Docker containers
SO problem: containers communicating with requests
Within the Docker network, applications must be accessed with the service names defined in the docker-compose.yml.
If you're trying to access the toy-api service, use
get_from_api(session, path="http://toy-api/test")
You can access the application via http://localhost/test on your host machine because Docker publishes the application's port to the host. However, loosely speaking, within the Docker network, localhost does not refer to the host's localhost but to the container's own localhost, and in the read-api container there is no application listening on http://localhost/test.
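For completeness, here is a minimal sketch of the corrected read-api client (it reuses the get_from_api and post_to_api helpers from the question; note that 80 here is the container port toy-api listens on, so the ports: mapping in docker-compose.yml is only needed for access from the host):
from requests import Session

session = Session()
session.trust_env = False

# Inside the Compose network, "toy-api" resolves via Docker's embedded DNS
# to the toy-api container; "localhost" would point back at read-api itself.
get_from_api(session, path="http://toy-api/test")
post_to_api(session, "this is a test", path="http://toy-api/raw")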
Related
I'm trying to run a simple Flask API, but it's not working as expected. I'm not very experienced in Python, so finding the error and solving it has been very challenging. I would appreciate it a lot if someone could help.
The system's settings are:
Ubuntu 18.04
Conda environment with Python 3.7
And these are the requirements:
$ pip freeze
ansimarkup==1.4.0
asn1crypto==0.24.0
better-exceptions-fork==0.2.1.post6
certifi==2019.3.9
cffi==1.12.3
chardet==3.0.4
Click==7.0
colorama==0.4.1
cryptography==2.7
Flask==1.0.3
idna==2.8
itsdangerous==1.1.0
Jinja2==2.10.1
loguru==0.2.5
MarkupSafe==1.1.1
pycparser==2.19
Pygments==2.4.2
pyOpenSSL==19.0.0
PySocks==1.7.0
requests==2.22.0
six==1.12.0
urllib3==1.24.2
Werkzeug==0.15.4
My project structure is like this:
├── statsapi
│ ├── data_store.py
├── app.py
├── client.py
├── requirements.txt
Here is the app.py code:
#!/usr/bin/env python
from flask import Flask, request, jsonify
from loguru import logger

from statsapi import data_store

app = Flask(__name__)

# Creating an endpoint
@app.route("/data", methods=["POST"])
def save_data():
    # log this action
    logger.info(f"Saving data...")
    # convert the request content to JSON
    content = request.get_json()
    # store just the "data" field; save() returns the uuid of the data
    uuid = data_store.save(content["data"])
    # log the result
    logger.info(f"Data saved with UUID `{uuid}` successfully")
    # define the information to be returned
    return jsonify({"status": "success",
                    "message": "data saved successfully",
                    "uuid": uuid})

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
The data_store.py code:
#!/usr/bin/env python
from uuid import uuid4

# Create a dictionary to keep things in memory
_in_memory_storage = dict()

# Save received data in memory under a fresh uuid
def save(data):
    data_uuid = uuid4()
    _in_memory_storage[data_uuid] = data
    return data_uuid
And the client.py code:
#!/usr/bin/env python
import requests

def send(data):
    response = requests.post("http://localhost:5000/data", json={"data": data})
    print(response.json())

def main():
    send([1, 2, 3, 4])

if __name__ == "__main__":
    main()
The client.py script should send some data to the API, but when called it returns this long error message:
$ python client.py
Traceback (most recent call last):
File "/home/bruno/anaconda3/envs/statsapi/lib/python3.7/site-packages/urllib3/connection.py", line 159, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw)
File "/home/bruno/anaconda3/envs/statsapi/lib/python3.7/site-packages/urllib3/util/connection.py", line 80, in create_connection
raise err
File "/home/bruno/anaconda3/envs/statsapi/lib/python3.7/site-packages/urllib3/util/connection.py", line 70, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/bruno/anaconda3/envs/statsapi/lib/python3.7/site-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/home/bruno/anaconda3/envs/statsapi/lib/python3.7/site-packages/urllib3/connectionpool.py", line 354, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/home/bruno/anaconda3/envs/statsapi/lib/python3.7/http/client.py", line 1229, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/home/bruno/anaconda3/envs/statsapi/lib/python3.7/http/client.py", line 1275, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/home/bruno/anaconda3/envs/statsapi/lib/python3.7/http/client.py", line 1224, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/home/bruno/anaconda3/envs/statsapi/lib/python3.7/http/client.py", line 1016, in _send_output
self.send(msg)
File "/home/bruno/anaconda3/envs/statsapi/lib/python3.7/http/client.py", line 956, in send
self.connect()
File "/home/bruno/anaconda3/envs/statsapi/lib/python3.7/site-packages/urllib3/connection.py", line 181, in connect
conn = self._new_conn()
File "/home/bruno/anaconda3/envs/statsapi/lib/python3.7/site-packages/urllib3/connection.py", line 168, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7fd0c7a26a58>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/bruno/anaconda3/envs/statsapi/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/home/bruno/anaconda3/envs/statsapi/lib/python3.7/site-packages/urllib3/connectionpool.py", line 638, in urlopen
_stacktrace=sys.exc_info()[2])
File "/home/bruno/anaconda3/envs/statsapi/lib/python3.7/site-packages/urllib3/util/retry.py", line 399, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=5000): Max retries exceeded with url: /data (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fd0c7a26a58>: Failed to establish a new connection: [Errno 111] Connection refused'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "client.py", line 17, in <module>
main()
File "client.py", line 13, in main
send([1, 2, 3, 4])
File "client.py", line 7, in send
response = requests.post("http://localhost:5000/data", json={"data": data})
File "/home/bruno/anaconda3/envs/statsapi/lib/python3.7/site-packages/requests/api.py", line 116, in post
return request('post', url, data=data, json=json, **kwargs)
File "/home/bruno/anaconda3/envs/statsapi/lib/python3.7/site-packages/requests/api.py", line 60, in request
return session.request(method=method, url=url, **kwargs)
File "/home/bruno/anaconda3/envs/statsapi/lib/python3.7/site-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/home/bruno/anaconda3/envs/statsapi/lib/python3.7/site-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/home/bruno/anaconda3/envs/statsapi/lib/python3.7/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=5000): Max retries exceeded with url: /data (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fd0c7a26a58>: Failed to establish a new connection: [Errno 111] Connection refused'))
As I said, I'm far from being a Python expert, so I will be very thankful for any help.
This might not relate exactly to the question, but it might help someone facing a similar issue.
I had a similar problem, but in my case it was because of Docker containers: when container #1 connected to container #2 using the requests API, it failed because the two containers were not on the same network.
So I modified my docker-compose file to:
networks:
  default:
    external: true
    name: <network_name>
Make sure to create the Docker network in advance, and change the host to the Docker service name of container #2 when connecting with requests: use http://<docker_service_name>:8000/api instead of http://127.0.0.1:8000/api.
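For example, the client-side call would look like this (the service name and port are hypothetical placeholders; substitute the actual Docker service name of container #2):
import requests

# "api-service" is a hypothetical service name from the compose file of
# container #2; within the shared network it resolves to that container.
response = requests.get("http://api-service:8000/api")
print(response.status_code)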
I think the problem might be that your port is already being used by another app; try changing the Flask port.
I used your code locally and it ran successfully.
Try something like this:
main.py
import data_store
from flask import Flask, request, jsonify

app = Flask(__name__)

# Creating an endpoint
@app.route("/data", methods=["POST"])
def save_data():
    # log this action
    print(f"Saving data...")
    # convert the request content to JSON
    content = request.get_json()
    # store just the "data" field; save() returns the uuid of the data
    uuid = data_store.save(content["data"])
    # log the result
    print(f"Data saved with UUID `{uuid}` successfully")
    # define the information to be returned
    return jsonify({"status": "success",
                    "message": "data saved successfully",
                    "uuid": uuid})

if __name__ == "__main__":
    app.run(port=8000)
data_store.py
from uuid import uuid4

# Create a dictionary to keep things in memory
_in_memory_storage = dict()

# Save received data in memory under a fresh uuid
def save(data):
    data_uuid = uuid4()
    _in_memory_storage[data_uuid] = data
    print(f"Cached value => {_in_memory_storage}")
    return data_uuid
client.py
import requests

def send(data):
    response = requests.post("http://localhost:8000/data", json={"data": data})
    print(response.json())

def main():
    send([1, 2, 3, 4])

if __name__ == "__main__":
    main()
and when I tested it, it succeeded like this:
→ python client.py
{'message': 'data saved successfully', 'status': 'success', 'uuid': '050b02c2-e67e-44f7-b6c9-450106d8b40e'}
Flask log:
→ python main.py
* Serving Flask app "main" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://127.0.0.1:8000/ (Press CTRL+C to quit)
Saving data...
Cached value => {UUID('050b02c2-e67e-44f7-b6c9-450106d8b40e'): [1, 2, 3, 4]}
Data saved with UUID `050b02c2-e67e-44f7-b6c9-450106d8b40e` successfully
127.0.0.1 - - [06/Jun/2021 12:51:35] "POST /data HTTP/1.1" 200 -
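If you want to verify the port-conflict hypothesis before changing the Flask port, one quick sketch is to try binding the port yourself on the same machine as the server:
import socket

# If another process already owns port 5000, bind() raises
# OSError: [Errno 98] Address already in use.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(("127.0.0.1", 5000))
    print("Port 5000 is free")
except OSError as e:
    print(f"Port 5000 appears to be in use: {e}")
finally:
    s.close()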
How do I connect to a remote Docker host using Python?
>>> from docker import Client
>>> cli = Client(base_url='tcp://52.90.216.176:2375')
>>>
>>> cli.containers()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/docker/api/container.py", line 69, in containers
res = self._result(self._get(u, params=params), True)
File "/usr/local/lib/python2.7/site-packages/docker/utils/decorators.py", line 47, in inner
return f(self, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/docker/client.py", line 112, in _get
return self.get(url, **self._set_request_timeout(kwargs))
File "/usr/local/lib/python2.7/site-packages/requests/sessions.py", line 480, in get
return self.request('GET', url, **kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/sessions.py", line 468, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/sessions.py", line 576, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/adapters.py", line 437, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='52.90.216.176', port=2375): Max retries exceeded with url: /v1.21/containers/json?all=0&limit=-1&trunc_cmd=0&size=0 (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fd87d836750>: Failed to establish a new connection: [Errno 111] Connection refused',))
If I log in to 52.90.216.176 and use the following:
>>> cli = Client(base_url='unix://var/run/docker.sock')
this works. But how do I connect to Docker running on another server?
It sounds like you're using docker-py.
Also, it sounds like you may not be familiar with TLS, so please read the documentation for using TLS with docker-py. You may need to download your TLS files and copy them to the machine running the docker-py client, as they are used to prove that you are authorized to connect to the Docker daemon.
I hope your remote Docker daemon is not exposed to the world.
If it is not running TLS (exposed to the world):
client = docker.Client(base_url='<https_url>', tls=False)
If it is secured with TLS (not exposed to the world):
client = docker.Client(base_url='<https_url>', tls=True)
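Putting that together, a minimal TLS sketch based on the docker-py documentation (the certificate paths are placeholders for wherever you stored the files generated for your daemon):
import docker
from docker.tls import TLSConfig

# Hypothetical paths to the client certificate/key pair and the CA cert
tls_config = TLSConfig(client_cert=('/path/to/cert.pem', '/path/to/key.pem'),
                       verify='/path/to/ca.pem')
client = docker.Client(base_url='https://52.90.216.176:2376', tls=tls_config)
print client.containers()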
This is not an answer, but I need your feedback.
The error message is Connection refused, so please run the following command to confirm there is no firewall issue (sometimes the port is 2376):
telnet 52.90.216.176 2375
Add the tcp option to the Docker sysconfig file as shown here:
vi /etc/sysconfig/docker
OPTIONS="--host=tcp://0.0.0.0:2375"
After restarting Docker, I could connect to the remote Docker server using Python.
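After the restart, you can sanity-check the connection from Python (a sketch using the same docker-py client as the question; the IP is the example address from the question):
from docker import Client

cli = Client(base_url='tcp://52.90.216.176:2375')
print cli.version()  # returns daemon version info if the TCP socket is reachable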
I'm trying to set up a WebDAV connection using easywebdav in Python (using 2.7.8 for now).
import csv, easywebdav
webdav = easywebdav.connect('https://sakai.rutgers.edu/dav/restoftheurl', username='', password='')
print webdav.ls()
Though when I run this, I get the following error message. My guess is that it possibly has something to do with the URL using HTTPS?
Traceback (most recent call last):
File "/home/willkara/Development/SakaiStuff/WorkProjects/sakai-manager/file.py", line 4, in <module>
print webdav.ls()
File "build/bdist.linux-x86_64/egg/easywebdav/client.py", line 176, in ls
File "build/bdist.linux-x86_64/egg/easywebdav/client.py", line 97, in _send
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 456, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 559, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 375, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='https', port=80): Max retries exceeded with url: //sakai.rutgers.edu/dav/url:80/. (Caused by <class 'socket.gaierror'>: [Errno -2] Name or service not known)
[Finished in 0.1s with exit code 1]
I find it strange that you combine the HTTPS protocol with port 80; HTTPS uses port 443.
However, the error message "Name or service not known" rather indicates that the hostname sakai.rutgers.edu is not recognized on your system. Try to ping the host.
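Equivalently, you can test name resolution directly from Python (a small sketch; a socket.gaierror here matches the "Name or service not known" in the traceback):
import socket

try:
    print socket.gethostbyname("sakai.rutgers.edu")
except socket.gaierror as e:
    print "DNS lookup failed:", e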
I noticed that you shouldn't have http:// or https:// at the beginning of your address, only the host name; you select the protocol with protocol='https'. Also, I couldn't get it to work if I added the path to the URL; I had to pass the path as an argument to the operations, like webdav.ls('/dav/restoftheurl') or webdav.cd('/dav/restoftheurl').
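Combining both points, a sketch of a connection that follows this advice (the credentials are placeholders):
import easywebdav

# Host name only; the protocol is selected separately (easywebdav
# defaults to http), and the path goes to the operation itself.
webdav = easywebdav.connect('sakai.rutgers.edu',
                            protocol='https',
                            username='myuser',      # placeholder
                            password='mypassword')  # placeholder
print webdav.ls('/dav/restoftheurl')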
The following code works in the Python interactive shell:
import urllib2
result = urllib2.urlopen("http://www.google.com/")
and gives a 200 result.
If I run the same code in an AppEngine app running locally with the development server, it fails with the following error:
URLError: <urlopen error An error occured while connecting to the server:
Unable to fetch URL: http://www.google.com/
Error: [Errno 11004] getaddrinfo failed>
I've tried using the urlfetch library directly:
from google.appengine.api import urlfetch
result = urlfetch.fetch("http://www.google.com")
and this also fails (which makes sense, as I believe urllib2 within AppEngine calls URLFetch internally?)
I can clearly access the URL from my local machine - so what's happening?
UPDATE: the relevant stack trace:
File "c:\dev\repos\stackoverflow\main.py", line 40, in get_latest_comments
result = urlfetch.fetch("http://www.google.com")
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\urlfetch.py", line 266, in fetch
return rpc.get_result()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\apiproxy_stub_map.py", line 604, in get_result
return self.__get_result_hook(self)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\urlfetch.py", line 397, in _get_fetch_result
raise DownloadError("Unable to fetch URL: " + url + error_detail)
DownloadError: Unable to fetch URL: http://www.google.com Error: [Errno 11004] getaddrinfo failed
Do you have a proxy configured with environment variables? The dev_appserver clears all env vars.
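A quick way to check (a sketch): print the proxy-related environment variables in the shell that launches the dev server and compare with what your code sees under dev_appserver:
import os

for var in ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY"):
    print var, "=", os.environ.get(var)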
I ran into an error that was painful to track down, so I thought I'd add the cause + "solution" here.
The setup:
Devbox - Running Google App Engine listening on all ports ("--address=0.0.0.0") serving a URL that launches a task.
Client - a remote client (Python requests library) which queries the callback URL
App Engine code:
class StartTaskCallback(webapp.RequestHandler):
    def post(self):
        param = self.request.get('param')
        logging.info('STARTTASK: %s' % param)
        # launch a task
        taskqueue.add(url='/tasks/mytask',
                      queue_name='myqueue',
                      params={'param': param})

class MyTask(webapp.RequestHandler):
    def post(self):
        param = self.request.get('param')
        logging.info('MYTASK: param = %s' % param)
When I queried the callback with my browser, everything worked, but the same query from the remote client gave me the following error:
ERROR 2012-03-23 21:18:27,351 taskqueue_stub.py:1858] An error occured while sending the task "task1" (Url: "/tasks/mytask") in queue "myqueue". Treating as a task error.
Traceback (most recent call last):
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/taskqueue/taskqueue_stub.py", line 1846, in ExecuteTask
connection.endheaders()
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 868, in endheaders
self._send_output()
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 740, in _send_output
self.send(msg)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 699, in send
self.connect()
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/httplib.py", line 683, in connect
self.timeout)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py", line 498, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
gaierror: [Errno 8] nodename nor servname provided, or not known
This error would just spin in a loop as the task retried. Though oddly, I could go to Admin -> Task Queues and click 'Run' to get the task to complete successfully.
At first I thought this was an error with the binding. I would not get an error if I queried the StartTaskCallback via the browser or if I ran the client locally.
Finally I noticed that App Engine is using the 'host' field of the request in order to build an absolute URL for the task. In /Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/taskqueue/taskqueue_stub.py (1829):
connection_host, = header_dict.get('host', [self._default_host])
if connection_host is None:
    logging.error('Could not determine where to send the task "%s" '
                  '(Url: "%s") in queue "%s". Treating as an error.',
                  task.task_name(), task.url(), queue.queue_name)
    return False
connection = httplib.HTTPConnection(connection_host)
In my case, I was using a special name + hosts file on the remote client to access the server.
192.168.1.208 devbox
So the 'host' for the remote client looked like 'devbox:8085' which the local server could not resolve.
To fix the issue, I simply added devbox to my AppEngine server's hosts file, but it sure would have been nice if the gaierror exception had printed the name it failed to resolve, or if App Engine didn't use the 'host' of the incoming request to build a URL for task creation.
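One possible client-side workaround (a sketch, not tested against this exact setup): send a Host header the dev server can resolve, so that the task URL it builds points at a name known on the server:
import requests

# "devbox:8085" comes from the client's hosts file above; the URL path
# is hypothetical. Overriding Host makes the dev server build its task
# callback URL from a name it can resolve locally.
requests.post("http://devbox:8085/start_task",
              data={"param": "value"},
              headers={"Host": "localhost:8085"})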