After installing the airflow package on an AWS EC2 instance, I am trying to start the Airflow webserver. It fails with a permission denied error, and I cannot tell which file or folder it is trying to create or modify that causes this error.
[root@ip-172-31-62-1 airflow]# /usr/local/bin/airflow webserver -p 8080
[2017-06-13 04:24:35,692] {__init__.py:57} INFO - Using executor SequentialExecutor
/usr/local/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
[2017-06-13 04:24:36,093] [4053] {models.py:167} INFO - Filling up the DagBag from /home/ec2-user/airflow/dags
Running the Gunicorn Server with:
Workers: 4 sync
Host: 0.0.0.0:8080
Timeout: 120
Logfiles: - -
=================================================================
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 28, in <module>
args.func(args)
File "/usr/local/lib/python2.7/site-packages/airflow/bin/cli.py", line 791, in webserver
gunicorn_master_proc = subprocess.Popen(run_args)
File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
errread, errwrite)
File "/usr/lib64/python2.7/subprocess.py", line 1343, in _execute_child
raise child_exception
OSError: [Errno 13] Permission denied
------------------------------------
The value of run_args in the above error message is:
['gunicorn', '-w', '4', '-k', 'sync', '-t', '120', '-b', '0.0.0.0:8080', '-n', 'airflow-webserver', '-p', '/home/ec2-user/airflow/airflow-webserver.pid', '-c', 'airflow.www.gunicorn_config', '--access-logfile', '-', '--error-logfile', '-', 'airflow.www.app:cached_app()']
I had this same issue. It was resolved when I installed the whole setup with sudo. These are the commands I used:
sudo apt-get update && sudo apt-get -y upgrade
sudo apt-get install python-pip
sudo -H pip install airflow
sudo airflow initdb
sudo airflow webserver -p 8080
I had this same issue, and the existing answer wouldn't work for me because the user didn't have sudo permissions. What worked for me was adding the bin directory to the PATH:
export PATH=$PATH:/usr/local/bin
Or in my case:
export PATH=$PATH:~/.local/bin
Then I could just run:
airflow webserver -p 8080
Instead of:
.local/bin/airflow webserver -p 8080
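If you want that change to persist across shell sessions, you can append the export to your shell profile (a small sketch; adjust the path to wherever pip actually placed the airflow script on your machine):
echo 'export PATH=$PATH:~/.local/bin' >> ~/.bashrc
source ~/.bashrc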
I am trying to deploy a Cloud Run application containing a private Python package.
The code for the Cloud Run service is hosted on GitHub, and when I push code it triggers a Cloud Build that builds the Docker image, pushes it to Container Registry, and creates a Cloud Run service from that image.
Unfortunately, the Docker build stage fails: the build cannot access the private Python package that is available in Artifact Registry.
I have successfully used that package in a Cloud Function in the past, so I am sure the package works. I have also given the Cloud Build that builds the Docker image the same permissions as the Cloud Builds that build functions using that package, and those work.
I have created this issue in the past here, and got possible solutions using the JSON key file of a service account with the Owner role on the project, following that tutorial from the Google Cloud documentation. But I would like to avoid using a key, as the key should not be saved on GitHub. I am sure this is a permission issue, but I could not figure it out.
cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'build', '-t', 'gcr.io/${_PROJECT}/${_SERVICE_NAME}:$SHORT_SHA', '--network=cloudbuild', '.', '--progress=plain']
Dockerfile
FROM python:3.8.6-slim-buster
ENV APP_PATH=/usr/src/app
ENV PORT=8080
# Copy requirements.txt to the docker image and install packages
RUN apt-get update && apt-get install -y cython
RUN pip install --upgrade pip
# Set the WORKDIR to be the folder
RUN mkdir -p $APP_PATH
COPY / $APP_PATH
WORKDIR $APP_PATH
RUN pip install -r requirements.txt --no-color
RUN pip install --extra-index-url https://us-west1-python.pkg.dev/my-project/my-package/simple/ my-package==0.2.3 # This line is where the bug occurs
# Expose port
EXPOSE $PORT
# Use gunicorn as the entrypoint
CMD exec gunicorn --bind 0.0.0.0:8080 app:app
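One thing I am considering (an assumption on my part, not something I have verified in this setup) is installing Google's keyring backend in the image before the private install, so that pip could pick up the Cloud Build service account's credentials through the metadata server that --network=cloudbuild exposes, instead of needing a JSON key:
# Hypothetical change to the Dockerfile above (untested here):
# let pip authenticate to Artifact Registry via the keyring backend
RUN pip install keyring keyrings.google-artifactregistry-auth
RUN pip install --extra-index-url https://us-west1-python.pkg.dev/my-project/my-package/simple/ my-package==0.2.3
# (newer pip versions may additionally need the --keyring-provider import option)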
The permissions I added are:
the Cloud Build default service account (project-number@cloudbuild.gserviceaccount.com): Artifact Registry Reader
the service account running the Cloud Build: Artifact Registry Reader
the service account running the app: Artifact Registry Reader
The cloudbuild error:
Step 10/12 : RUN pip install --extra-index-url https://us-west1-python.pkg.dev/my-project/my-package/simple/ my-package==0.2.3
---> Running in b2ead00ccdf4
Looking in indexes: https://pypi.org/simple, https://us-west1-python.pkg.dev/muse-speech-devops/gcp-utils/simple/
User for us-west1-python.pkg.dev: ERROR: Exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/pip/_internal/cli/base_command.py", line 167, in exc_logging_wrapper
status = run_func(*args)
File "/usr/local/lib/python3.8/site-packages/pip/_internal/cli/req_command.py", line 205, in wrapper
return func(self, options, args)
File "/usr/local/lib/python3.8/site-packages/pip/_internal/commands/install.py", line 340, in run
requirement_set = resolver.resolve(
File "/usr/local/lib/python3.8/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 94, in resolve
result = self._result = resolver.resolve(
File "/usr/local/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py", line 481, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/usr/local/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py", line 348, in resolve
self._add_to_criteria(self.state.criteria, r, parent=None)
File "/usr/local/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py", line 172, in _add_to_criteria
if not criterion.candidates:
File "/usr/local/lib/python3.8/site-packages/pip/_vendor/resolvelib/structs.py", line 151, in __bool__
I have a Dockerfile that runs a Flask app using gunicorn. For my purposes I need to use HTTPS, so I'm setting up SSL with openssl. However, I keep running into this error:
[2020-02-24 17:01:18 +0000] [1] [INFO] Starting gunicorn 20.0.4
Traceback (most recent call last):
File "/usr/local/bin/gunicorn", line 11, in <module>
sys.exit(run())
File "/usr/local/lib/python3.6/dist-packages/gunicorn/app/wsgiapp.py", line 58, in run
WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
File "/usr/local/lib/python3.6/dist-packages/gunicorn/app/base.py", line 228, in run
super().run()
File "/usr/local/lib/python3.6/dist-packages/gunicorn/app/base.py", line 72, in run
Arbiter(self).run()
File "/usr/local/lib/python3.6/dist-packages/gunicorn/arbiter.py", line 198, in run
self.start()
File "/usr/local/lib/python3.6/dist-packages/gunicorn/arbiter.py", line 155, in start
self.LISTENERS = sock.create_sockets(self.cfg, self.log, fds)
File "/usr/local/lib/python3.6/dist-packages/gunicorn/sock.py", line 162, in create_sockets
raise ValueError('certfile "%s" does not exist' % conf.certfile)
ValueError: certfile "server.crt" does not exist
Here is my Dockerfile:
FROM ubuntu:latest
RUN apt-get update && apt-get install python3-pip -y && \
apt-get install python3-dev openssl
RUN openssl req -nodes -new -x509 -keyout server.key -out server.cert -subj "/C=US/ST=MD/L=Columbia/O=Example/OU=ExampleOU/CN=example.com/emailAddress=seanbrhn3@gmail.com"
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
COPY . /app
ENV PORT 8080
CMD ["gunicorn", "--certfile=server.crt","--keyfile=server.key","app:app", "--config=config.py"]
All help is most appreciated!
I solved the problem: instead of creating the cert and key in the Dockerfile, I used the COPY command to take my cert and key from my local directory and put them into the image. My problem was just my lack of knowledge of Docker.
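For illustration, the relevant change looked roughly like this (a sketch; the filenames assume the certificate and key sit next to the Dockerfile):
# Copy a pre-generated certificate and key into the image instead of generating them at build time
COPY server.crt /app/server.crt
COPY server.key /app/server.key
CMD ["gunicorn", "--certfile=server.crt", "--keyfile=server.key", "app:app", "--config=config.py"]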
This probably won't be the case for most people, but I wanted to put this out there just in case. I had this issue and realized that I had forgotten that I was mounting a volume in my docker run command.
So even though I was downloading files in my Dockerfile (during docker build), as soon as I did docker run with the volume mounted (and the volume did not contain those downloaded files), all the downloaded files were hidden by the mount.
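As a rough illustration (image and directory names here are made up): files baked into a directory at build time are shadowed whenever a volume is mounted over that directory at run time:
docker build -t myapp .                                        # image contains files downloaded into /app/data
docker run myapp ls /app/data                                  # the downloaded files are visible
docker run -v $(pwd)/empty-dir:/app/data myapp ls /app/data    # the mount hides them; only empty-dir's contents show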
I have installed elastalert on CentOS 7.6, and while starting elastalert I receive the following error.
[root@e2e-27-36 elastalert]# python -m elastalert.elastalert --verbose --rule example_rules/example_frequency.yaml
Traceback (most recent call last):
File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/root/elastalert/elastalert/elastalert.py", line 29, in <module>
from . import kibana
File "elastalert/kibana.py", line 4, in <module>
import urllib.error
ImportError: No module named error
How should I go about fixing this?
You can check whether urllib3 is installed by running pip freeze, or try to reinstall it with pip install urllib3.
You may also need to activate your virtual environment correctly, like this: source [env]/bin/activate.
Set up a conda environment:
conda create -n elastalert python=3.6 anaconda
Activate the conda environment:
conda activate elastalert
Install all the requirements:
pip install -r requirements-dev.txt
pip install -r requirements.txt
I found the fix on my own.
1. On Python 2.7 the issue still persists.
2. Install Python 3.6 to fix the issue:
yum install python3 python3-devel python3-urllib3
3. Run the elastalert command:
python3 -m elastalert.elastalert --config /root/elastalert/config.yaml --verbose --rule /root/elastalert/example_rules/example_frequency.yaml
4. You may get an error about missing modules (ModuleNotFoundError: No module named 'pytz').
5. Install the modules as per the requirements file:
pip3 install -r /root/elastalert/requirements.txt
6. Run the command "python3 -m elastalert.elastalert --config /root/elastalert/config.yaml --verbose --rule /root/elastalert/example_rules/example_frequency.yaml" again, and you get this error:
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='elasticsearch.example.com', port=9200): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or service not known',))
7. The above error is due to an invalid hostname in the config.yaml file. Edit config.yaml and set the es_host field to the server's hostname.
Make sure there is a matching entry in the /etc/hosts file.
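For example (the hostname and IP here are placeholders), config.yaml would contain something like:
es_host: my-es-server
es_port: 9200
and /etc/hosts would have a matching line such as:
172.31.0.10 my-es-server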
8. OK, that issue is fixed; run the command "python3 -m elastalert.elastalert --config /root/elastalert/config.yaml --verbose --rule /root/elastalert/example_rules/example_frequency.yaml" again, and there is one more error:
pkg_resources.DistributionNotFound: The 'jira>=2.0.0'
9. Install jira using the command below:
pip3 install jira==2.0.0
10. Now run the command "python3 -m elastalert.elastalert --config /root/elastalert/config.yaml --verbose --rule /root/elastalert/example_rules/example_frequency.yaml" again, and yet another error appears:
elasticsearch.exceptions.TransportError: TransportError(429, 'circuit_breaking_exception', '[parent] Data too large, data for [] would be [994793504/948.7mb], which is larger than the limit of [986061209/940.3mb], real usage: [994793056/948.7mb], new bytes reserved: [448/448b]')
11. Fix this by increasing the heap size in /etc/elasticsearch/jvm.options:
-Xms1g to -Xms2g
-Xmx1g to -Xmx2g
and restart the Elasticsearch service: service elasticsearch restart
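For reference, after the change the relevant lines in /etc/elasticsearch/jvm.options would look like this (2g is just the value used here; size the heap to your host's memory):
-Xms2g
-Xmx2g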
12. Everything is set; run the command "python3 -m elastalert.elastalert --config /root/elastalert/config.yaml --verbose --rule /root/elastalert/example_rules/example_frequency.yaml" again, and it ends with yet another error:
ERROR:root:Error finding recent pending alerts: NotFoundError(404, 'index_not_found_exception', 'no such index [elastalert_status]', elastalert_status, index_or_alias) {'query': {'bool': {'must': {'query_string': {'query': '!exists:aggregate_id AND alert_sent:false'}}, 'filter': {'range': {'alert_time': {'from': '2019-12-04T19:45:09.635478Z', 'to': '2019-12-06T19:45:09.635529Z'}}}}}, 'sort': {'alert_time': {'order': 'asc'}}}
13. Fix the issue by running the command below:
elastalert-create-index
14. Finally, everything is done; run the command below:
python3 -m elastalert.elastalert --config /root/elastalert/config.yaml --verbose --rule /root/elastalert/example_rules/example_frequency.yaml
Then cancel the command and run it again in the background:
python3 -m elastalert.elastalert --config /root/elastalert/config.yaml --verbose --rule /root/elastalert/example_rules/example_frequency.yaml &
I'm trying to install pycurl via:
sudo pip install pycurl
It downloaded fine, but when it runs setup.py I get the following traceback:
Downloading/unpacking pycurl
Running setup.py egg_info for package pycurl
Traceback (most recent call last):
File "<string>", line 16, in <module>
File "/tmp/pip-build-root/pycurl/setup.py", line 563, in <module>
ext = get_extension()
File "/tmp/pip-build-root/pycurl/setup.py", line 368, in get_extension
ext_config = ExtensionConfiguration()
File "/tmp/pip-build-root/pycurl/setup.py", line 65, in __init__
self.configure()
File "/tmp/pip-build-root/pycurl/setup.py", line 100, in configure_unix
raise ConfigurationError(msg)
__main__.ConfigurationError: Could not run curl-config: [Errno 2] No such file or directory
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 16, in <module>
File "/tmp/pip-build-root/pycurl/setup.py", line 563, in <module>
ext = get_extension()
File "/tmp/pip-build-root/pycurl/setup.py", line 368, in get_extension
ext_config = ExtensionConfiguration()
File "/tmp/pip-build-root/pycurl/setup.py", line 65, in __init__
self.configure()
File "/tmp/pip-build-root/pycurl/setup.py", line 100, in configure_unix
raise ConfigurationError(msg)
__main__.ConfigurationError: Could not run curl-config: [Errno 2] No such file or directory
Any idea why this is happening and how to get around it using Alpine Linux?
Found it. I believe this works.
# Install packages
RUN apk add --no-cache libcurl
# Needed for pycurl
ENV PYCURL_SSL_LIBRARY=openssl
# Install packages only needed for building, install and clean on a single layer
RUN apk add --no-cache --virtual .build-dependencies build-base curl-dev \
&& pip install pycurl \
&& apk del .build-dependencies
I had the same issue building a Tornado app based on the python:3.7.2-alpine3.9 image. I was able to get past this error by using the curl-dev package, as noted in pycURL's install instructions.
Under the pycURL Install header:
NOTE: You need Python and libcurl installed on your system to use or build pycurl. Some RPM distributions of curl/libcurl do not include everything necessary to build pycurl, in which case you need to install the developer specific RPM which is usually called curl-dev.
Here is the relevant part of the Dockerfile
RUN apk add --no-cache libcurl
RUN apk update \
&& apk add --virtual .build-deps \
curl-dev \
&& pip install -e ./ \
&& apk --purge del .build-deps
If you want to verify the features available through curl, I did the following:
docker exec -it <container_name> sh
apk add curl
curl --version
The output of curl --version is similar to
curl 7.64.0 (x86_64-alpine-linux-musl) libcurl/7.64.0 OpenSSL/1.1.1b zlib/1.2.11 libssh2/1.8.1 nghttp2/1.35.1
Release-Date: 2019-02-06
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP HTTP2 UnixSockets HTTPS-proxy
Specifically, I was interested in AsynchDNS being present so I could use Tornado's curl_httpclient.
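For reference, switching Tornado over to the curl-based client is a one-liner (illustrative snippet, not part of the image above):
import tornado.httpclient
# Use the pycurl-backed client; requires pycurl, and AsynchDNS in libcurl for non-blocking DNS lookups
tornado.httpclient.AsyncHTTPClient.configure("tornado.curl_httpclient.CurlAsyncHTTPClient")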
I am trying to run a Flask app on my Windows 10 laptop through Bash on Ubuntu on Windows, but I am getting the following error:
None
Traceback (most recent call last):
File "app.py", line 126, in
app.run(debug=True) #Deubug is set to true
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 841, in run
run_simple(host, port, self, **options)
File "/usr/local/lib/python2.7/dist-packages/werkzeug/serving.py", line 720, in run_simple
s.bind((hostname, port))
File "/usr/lib/python2.7/socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 13] Permission denied
I am not trying to run on a restricted port; it should be on port 5000.
Here are the commands I ran:
sudo apt-get install python-pip
sudo pip install Flask
sudo pip install flask-bootstrap
sudo pip install bokeh
python app.py
When Flask was installed, I got back a warning:
The directory '/home/User/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/home/User/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Running sudo python app.py does not help either.
Any thoughts on what is happening?