pyftpdlib Network Protocol Error - Python

I am using pyftpdlib and pymongo to build an FTP server with GridFS.
Locally everything works great.
Now I want to run the server using Docker. I am using the python:3.6-alpine Docker image and a mongo:latest image.
I run the FTP server with:
docker run -it --rm -p 21:21 ftpimage
And the mongo image with:
docker run -it --rm mongo
Then I connect with:
ftp localhost
Login works and so does the pwd command. But when I run ls, I get the following error:
522 Network protocol not supported (use 1).
500 Command "LPRT" not understood.
ftp: bind: Address already in use
I was looking through the RFCs, and "use 1" means IPv4. But I don't use anything else.
The FTP server doesn't log any errors, just my FTP client. And I don't know why it uses IPv6.
When I enter sudo netstat -lptu I get this:
tcp6 0 0 [::]:ftp [::]:* LISTEN 4972/docker-proxy
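For context, the server is started with something like this (a minimal sketch of a typical pyftpdlib setup, not my exact code; the user, password, and directory are placeholders):

from pyftpdlib.authorizers import DummyAuthorizer
from pyftpdlib.handlers import FTPHandler
from pyftpdlib.servers import FTPServer

authorizer = DummyAuthorizer()
authorizer.add_user("user", "password", "/srv/ftp", perm="elradfmw")  # placeholders

handler = FTPHandler
handler.authorizer = authorizer
# Pin the passive-port range so it can be published next to -p 21:21,
# e.g. -p 60000-60010:60000-60010
handler.passive_ports = range(60000, 60011)

# Binding to 0.0.0.0 explicitly creates an IPv4 listening socket
server = FTPServer(("0.0.0.0", 21), handler)
server.serve_forever()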
Can anybody tell me where this comes from? I haven't set up any IPv6 stuff.
Thanks for any help :)

Related

Dronekit-python running in docker connecting to MAVProxy on host

I am using dronekit-python in a docker container and am attempting to connect to an instance of MAVProxy running on my host machine (Mac OSX) using the following command:
vehicle = connect('udp:host.docker.internal:14551', wait_ready=True)
but am getting the following error:
File "/usr/local/lib/python3.7/site-packages/pymavlink/mavutil.py", line 1015, in __init__
self.port.bind((a[0], int(a[1])))
OSError: [Errno 99] Cannot assign requested address
Does anyone know what the issue is here? I am able to connect successfully using the above command when I run the Python script locally on the host, but not when it runs in a Docker container.
I found a similar Stack Overflow question here, but the accepted answer did not work for me. I'm not sure if I need to be exposing ports or something like that.
Here is the command that I am running on my host machine to kick off MAVProxy:
mavproxy.py --master=127.0.0.1:14550 --out udp:127.0.0.1:14551 --out udp:10.55.222.120:14550 --out udp:127.0.0.1:14552
I ended up getting MAVProxy on the host and dronekit-python in the Docker Flask container properly connected.
Seemus790's answer in this gitter thread did the trick.
Working solution:
MAVProxy on host machine (Mac OS in my case)
mavproxy.py --master=127.0.0.1:14550 --out udp:127.0.0.1:14551 --out udp:10.55.222.120:14550 --out=tcpin:0.0.0.0:14552
dronekit-python command in docker container:
vehicle = connect('tcp:host.docker.internal:14552', wait_ready=True)
The trick was the --out=tcpin:0.0.0.0:14552 part of the mavproxy command, which is documented here: it makes MAVProxy listen for an inbound TCP connection. The original udp: connection string failed because pymavlink binds (listens on) the address you give it, and host.docker.internal is not a local interface inside the container, hence the Errno 99 from self.port.bind.
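For what it's worth, the distinction can be sketched like this (a hedged illustration; only the tcp: line comes from the working setup above):

from dronekit import connect

# 'udp:host:port' makes pymavlink bind to (listen on) that address; inside
# the container, host.docker.internal is not a local interface, hence the
# Errno 99:
# vehicle = connect('udp:host.docker.internal:14551', wait_ready=True)

# 'tcp:host:port' dials out instead, which works through Docker's NAT once
# MAVProxy listens with --out=tcpin:0.0.0.0:14552:
vehicle = connect('tcp:host.docker.internal:14552', wait_ready=True)
print(vehicle.version)
vehicle.close()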

How can I run a Portia spider via its port?

I am trying to run a spider with Portia in its Docker version, but I don't want to execute the spider using a terminal command like docker exec ... portiacrawl .... Is there any way I can run the spider, which is already created, by making a request to its localhost port and saving the output in a specific folder?
Something like:
https://localhost:9001/execute/spider_name/folder_path
Example of my own usage:
First, I run the container and leave it running, because I can't stop it for other reasons:
docker run -i -t -d --rm -v <PROJECTS_FOLDER>:/app/data/projects:rw -p 9001:9001 scrapinghub/portia
Next I execute the portiacrawl:
docker exec <CONTAINER_ID> portiacrawl <PROJECT_NAME_PATH> <SPIDER_NAME> -o /some/path/in/my/pc/<SPIDER_NAME>.json
Now, what I want is to replace the docker exec step with some HTTP request to the localhost server that is running.
Thanks very much for your time
Yes, you can, by doing port mapping. When starting a Docker container, no ports are published publicly or exposed internally unless you tell Docker to do so.
For example:
if you wish to expose a port internally (inside the Docker network itself), you need to add EXPOSE in the Dockerfile
if you wish to publish a port publicly, accessible through localhost or the public IP, you can use the -p option along with the ports, so in your case it will be like this:
docker run -p 9001:9001 imagename
The command above tells Docker to map port 9001 on the host (on localhost or any other interface) to port 9001 inside the container; you can change the ports according to your actual setup.
If you wish to expose it on localhost only, you can change the command to something like this:
docker run -p 127.0.0.1:9001:9001 imagename
For more information, check the following docs.
According to the updated question, the other (and safest) way to accomplish this would be to implement a small API around portiacrawl that can be called over HTTP to perform the needed tasks, instead of using docker exec.
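If you do build such a wrapper, a rough sketch (entirely hypothetical; Portia ships no such endpoint, and the project path and port are placeholders) could run inside the container and shell out to portiacrawl on each request:

import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

PROJECT = "/app/data/projects/my_project"  # placeholder: your project path

class CrawlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expected path: /execute/<spider_name>
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "execute":
            spider = parts[1]
            out = "/app/data/projects/%s.json" % spider
            subprocess.run(["portiacrawl", PROJECT, spider, "-o", out])
            self.send_response(200)
            self.end_headers()
            self.wfile.write(out.encode())
        else:
            self.send_response(404)
            self.end_headers()

HTTPServer(("0.0.0.0", 9002), CrawlHandler).serve_forever()

You would then publish the extra port with -p 9002:9002 and trigger a crawl with a plain GET to http://localhost:9002/execute/<SPIDER_NAME>.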

get a python docker container to interact with a redis docker container

I'm new to Docker, Redis, and any kind of networking (I know Python at least!). First, I have figured out how to get a Redis Docker image and run it in a container:
docker run --name some-redis -d redis
As I understand it, this Redis instance has port 6379 available for connections from other containers.
docker network inspect bridge
"Containers": {
"2ecceba2756abf20d5396078fd9b2ecf0d60ab04ca6b8df5e1b631b6fb5e9a85": {
"Name": "some-redis",
"EndpointID": "09f0069dae3632a2456cb4d82ad5e7c9782a2b58cb7a4ee655f57b5c410c3e87",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
If I run the following command I can interact with the redis instance and generate key:value pairs:
docker run -it --link some-redis:redis --rm redis redis-cli -h redis -p 6379
set 'a' 'abc'
>OK
get 'a'
>"abc"
quit
I have figured out how to make and run a docker container with the redis library installed that will run a python script as follows:
Here is my Dockerfile:
FROM python:3
ADD redis_test_script.py /
RUN pip install redis
CMD [ "python", "./redis_test_script.py" ]
Here is redis_test_script.py:
import redis
print("hello redis-py")
Build the docker image:
docker build -t python-redis-py .
If I run the following command the script runs in its container:
docker run -it --rm --name pyRed python-redis-py
and returns the expected:
>hello redis-py
It seems like both containers are working OK. The problem is connecting them to each other: I would ultimately like to use Python to perform operations on the Redis container. If I modify the script as follows and rebuild the image for the Python container, it fails:
import redis
print("hello redis-py")
r = redis.Redis(host="localhost", port=6379, db=0)
r.set('z', 'xyz')
r.get('z')
I get several errors:
...
OSError: [Errno 99] Cannot assign requested address
...
redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.
.....
It looks like they're not connecting. I tried again using the bridge IP in the Python script:
r = redis.Redis(host="172.17.0.0/16", port=6379, db=0)
and get this error:
redis.exceptions.ConnectionError: Error -2 connecting to 172.17.0.0/16:6379. Name or service not known.
and I tried the Redis container's IP (with its subnet suffix):
r = redis.Redis(host="172.17.0.2/16", port=6379, db=0)
and I get this error:
redis.exceptions.ConnectionError: Error -2 connecting to 172.17.0.2/16:6379. Name or service not known.
It feels like I'm fundamentally misunderstanding something about how to get the containers to talk to each other. I've read quite a lot of documentation and tutorials, but as I said, I have no networking experience and have not used Docker before, so any helpful explanations and/or solutions would be really great.
Many thanks
That's all about Docker networking. The fast solution is to use host network mode for both containers. The drawback is weaker isolation, but you will get it working quickly:
docker run -d --network=host redis ...
docker run --network=host python-redis-py ...
Then, to connect from Python to Redis, just use localhost as the hostname.
A better solution is to use a Docker user-defined bridge network:
# create network
docker network create foo
docker run -d --network=foo --name my-db redis ...
docker run --network=foo python-redis-py ...
Note that in this case you cannot use localhost; use my-db as the hostname instead. That's why I passed --name my-db when starting the first container: on user-defined bridge networks, containers reach each other by their names.
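With that network in place, the test script from the question only needs the container name as its hostname (a sketch; my-db matches the --name used above):

import redis

# Docker's embedded DNS resolves "my-db" on the user-defined network
r = redis.Redis(host="my-db", port=6379, db=0)
r.set('z', 'xyz')
print(r.get('z'))  # b'xyz'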
Do:
Explicitly create a Docker network for your application, and run your containers connected to that network. (If you use Docker Compose, this happens for you automatically and you don’t need to do anything.)
docker network create foo
docker run -d --net foo --name some-redis redis
docker run -it --rm --net foo --name pyRed python-redis-py
Use containers’ --name as DNS hostnames: you connect to some-redis:6379 to reach the container. (In Docker Compose the name of the service block works too.)
Make the locations of external services configurable, most likely using an environment variable. In your Python code (with import os at the top) you can connect with:
redis.Redis(host=os.environ.get("REDIS_HOST", "localhost"),
            port=int(os.environ.get("REDIS_PORT", "6379")))
docker run --rm -it \
--name py-red \
--net foo \
-e REDIS_HOST=some-redis \
python-redis-py
Don’t:
docker inspect anything to find the container-private IP addresses. Between containers you can always use hostnames as described above. The container-private IP addresses are unreachable from other hosts, and may even be unreachable from the same host on some platforms.
Use localhost in Docker for anything, except the specific case of connecting from a browser or other process running directly on the host (not in a container) to a port you've published with docker run -p on the same host. (Inside a container, localhost generally means "this container".)
Hard-code host names in your code like this; it makes it hard to run the service in a different environment. (For databases in particular it’s not uncommon to run them outside of Docker or even in a hosted cloud service.)
Use --link; it's outdated and unnecessary.

Why can't I start service web?

Please help me with this error.
ERROR: for web Cannot start service web: driver failed programming
external connectivity on endpoint semestral_dj01
(335d0ad4599512f3228b4ed0bd1bfed96f54af57cff4a553d88635f80ac2e26c):
Bind for 0.0.0.0:8000 failed: port is already allocated ERROR:
Encountered errors while bringing up the project.
Go to the terminal and run this command:
lsof -i:8000
Where 8000 is the port number.
The result will look like this:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
Python 123456 user ab type 123 000 TCP 0.0.0.0:8000
Now run this command in the terminal:
kill -9 <PID>
like
kill -9 123456
Then run your server again and the issue will be resolved.
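If you prefer to check from Python before starting the stack, a small stdlib-only sketch is:

import socket

def port_in_use(port, host="0.0.0.0"):
    # Try to bind; failure means another process already holds the port
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return False
        except OSError:
            return True

print(port_in_use(8000))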
The way I resolved this was by stopping the running containers and then starting the one I wanted.
Use this command in your terminal to stop all containers:
docker stop $(docker ps -a -q)
If you also want to delete them, use this:
docker rm $(docker ps -a -q)
This happens to me from time to time in my dev environment. Usually I have to restart the Docker service to get it working.
I encountered a very similar error. In my case, I had recently upgraded the native nginx version on the Linux box. After the upgrade, nginx automatically started (I had not noticed). When I deployed a docker image with nginx, the 2 nginx instances were competing for the same port (native and docker).
I saw it with:
> sudo netstat -nl -p tcp | grep 443
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN #####/nginx: master
tcp6 0 0 :::443 :::* LISTEN #####/nginx: master
It was a bit confusing since I was trying to get nginx to run, and it said nginx was using the port. After I had typed docker-compose down, I realized nginx was still using the port, even though the nginx container was destroyed. That made me realize that the native nginx had started up again, even though I didn't manually start it.
My error message:
Cannot start service <webserver>: driver failed programming external connectivity on endpoint <server_instance>_webserver (...<guid>...): Error starting userland proxy: listen tcp 0.0.0.0:443: bind: address already in use

Connection Refused Docker Run

I'm getting a connection refused error after building my Docker image and running docker run -t imageName.
Inside the container my python script is making web requests (external API call) and then communicating over localhost:5000 to a logstash socket.
My dockerfile is really simple:
FROM ubuntu:14.04
RUN apt-get update -y
RUN apt-get install -y nginx git python-setuptools python-dev
RUN easy_install pip
#Install app dependencies
RUN pip install requests configparser
EXPOSE 80
EXPOSE 5000
#Add project directory
ADD . /usr/local/scripts/
#Set default working directory
WORKDIR /usr/local/scripts
CMD ["python", "logs.py"]
However, I get an [ERROR] Connection refused message when I try to run this. It's not immediately obvious to me what I'm doing wrong here: I believe I'm opening 80 and 5000 to the outside world. Is this incorrect? Thanks.
Regarding EXPOSE:
Each container you run has its own network interface. EXPOSE 5000 tells Docker to map port 5000 from the container's network interface to a random port on your host machine (see it with docker ps), but only if you ask for it by passing -P to docker run.
Regarding logstash:
If logstash is installed on your host, or in another container, then it is not on the container's "localhost" (remember that each container has its own network interface, and each one has its own localhost). So you need to point to logstash properly.
How?
Method 1:
Don't give the container its own interface, so that it shares localhost with your machine:
docker run --net=host ...
Method 2:
If you are using docker-compose, use Docker network linking, e.g.:
services:
  py_app:
    ...
    links:
      - logstash
  logstash:
    image: .../logstash..
Then point to it as logstash:5000 (Docker will resolve that name to the internal IP of the logstash service).
Method 3:
If logstash listens on localhost:5000 on your host, you can point to it as 172.17.0.1:5000 from inside your container (172.17.0.1 is the host's fixed IP on the default bridge network, though this option is arguably less elegant).
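Whichever method you pick, it helps to make the endpoint configurable in the script instead of hard-coding localhost:5000 (a sketch; the LOGSTASH_HOST and LOGSTASH_PORT variable names are my own):

import os
import socket

host = os.environ.get("LOGSTASH_HOST", "localhost")
port = int(os.environ.get("LOGSTASH_PORT", "5000"))

# Plain TCP connection to the logstash input; swap the hostname per method:
# "logstash" for compose linking, "172.17.0.1" for the bridge gateway,
# "localhost" with --net=host
sock = socket.create_connection((host, port), timeout=5)
sock.sendall(b'{"message": "hello from the container"}\n')
sock.close()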