This is my Dockerfile:
FROM python:3.6.5-alpine3.7
RUN mkdir folder_1
RUN mkdir folder_2
RUN apk --update add build-base libffi-dev openssl-dev python-dev py-pip p7zip libc6-compat libstdc++
RUN pip install fabric3 boto3 csvsort
EXPOSE <port>
ADD directory/ /
CMD ["python", "scriptname.py"]
The application runs a series of steps, one of which is to extract 7z files from folder_1 to folder_2. It is able to find folder_1 and the source folder, but it cannot find folder_2. I logged into the container to make sure folder_2 exists, and it does.
I found another question with a similar problem (https://serverfault.com/questions/883625/alpine-shell-cant-find-file-in-docker) and installed libc6-compat and libstdc++ as suggested in its answer.
This is the line of code that's failing:
os.system('7za x ' + source_path + file_name + ' -' +
          file_decryption_password + ' -o' + destination_path)
Here, destination_path is 'folder_2/' and the exact error that I get is
sh: -ofolder_2/: not found
The command and the Docker image work fine on my Mac laptop; the container fails only on the Linux-based server.
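One way to rule out shell word-splitting (for example a password or file name containing spaces, or a trailing newline picked up when the password was read from a file) is to skip the shell entirely and pass the arguments as a list. This is only a sketch under that assumption, reusing the same variables as above; note that 7-Zip expects the password as -p and the output directory as -o, each attached directly to its value:

import subprocess

# Build the 7za invocation as an argument list so sh never parses it;
# .strip() guards against a stray trailing newline in the password.
result = subprocess.run(
    ['7za', 'x', source_path + file_name,
     '-p' + file_decryption_password.strip(),
     '-o' + destination_path],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    universal_newlines=True)
if result.returncode != 0:
    print(result.stderr)  # 7za's own error instead of sh's "not found"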
The problem involves using the LibreOffice headless converter to automatically convert uploaded files. I am getting this error:
LibreOffice 7 fatal error - Application cannot be started
Ubuntu version: 21.04
What I have tried:
Getting the file from Azure Blob Storage,
putting it into BASE_DIR/input_files,
converting it to PDF with a Linux command run via subprocess,
and putting the result into the BASE_DIR/output_files folder.
Below is my code:
I am installing the LibreOffice to docker this way
RUN apt-get update \
&& ACCEPT_EULA=Y apt-get -y install LibreOffice
The main logic:
import os
import subprocess

blob_client = container_client.get_blob_client(f"Folder_with_reports/{filename}")
data = blob_client.download_blob().readall()  # assumption: the original snippet omitted the download step
with open(os.path.join(BASE_DIR, f"input_files/{filename}"), "wb") as source_file:
    source_file.write(data)
source_file = os.path.join(BASE_DIR, f"input_files/{filename}")  # original docs here
output_folder = os.path.join(BASE_DIR, "output_files")  # pdf files will be here
# assemble the command that converts the file through LibreOffice
command = rf"lowriter --headless --convert-to pdf {source_file} --outdir {output_folder}"
# run the command
subprocess.run(command, shell=True)
# read the converted file and upload it back to Azure Storage
with open(os.path.join(BASE_DIR, "output_files/MyFile.pdf"), "rb") as outp_file:
    outp_data = outp_file.read()
blob_name_ = "test"
container_client.upload_blob(name=blob_name_, data=outp_data, blob_type="BlockBlob")
Should I install lowriter instead of LibreOffice? Is it okay to use BASE_DIR for this kind of operation? I would appreciate any suggestions.
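To dig into the "Application cannot be started" error, one diagnostic sketch (using the same command variable as above) is to capture the converter's output instead of discarding it:

import subprocess

# Run the conversion and keep stdout/stderr so LibreOffice's real
# complaint is visible instead of a silent non-zero exit.
result = subprocess.run(command, shell=True,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        universal_newlines=True)
print("exit code:", result.returncode)
print(result.stdout)
print(result.stderr)

If the output mentions the user profile, a common fix inside containers is to give the converter a writable profile directory via soffice's -env:UserInstallation=file:///tmp/lo option.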
Partial solution:
Here I have simplified the case and created an additional Docker image with the Dockerfile below.
I apply both methods: unoconv and direct conversion.
Dockerfile:
FROM ubuntu:21.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get -y upgrade && \
apt-get -y install python3.10 && \
apt update && apt install python3-pip -y
# Method1 - installing LibreOffice and java
RUN apt-get --no-install-recommends install libreoffice -y
RUN apt-get install -y libreoffice-java-common
# Method2 - additionally installing unoconv
RUN apt-get install -y unoconv
ARG CACHEBUST=1
ADD BASE.py /code/BASE.py
# copying input doc/docx files to the docker's linux
COPY /input_files /code/input_files
CMD ["/code/BASE.py"]
ENTRYPOINT ["python3"]
BASE.py
import os
import subprocess

BASE_DIR = "/code"
# subprocess.run("ls /code/input_files", shell=True)
for filename in os.listdir("/code/input_files"):
    source_file = f"/code/input_files/{filename}"  # original document
    output_filename = os.path.splitext(filename)[0] + ".pdf"
    output_file = f"/code/output_files/{output_filename}"
    output_folder = "/code/output_files"  # pdf files will be here
    # METHOD 1 - LibreOffice directly
    # assemble the command that converts the file through LibreOffice
    convert_to_pdf = rf"libreoffice --headless --convert-to pdf {source_file} --outdir {output_folder}"
    subprocess.run(convert_to_pdf, shell=True)
    subprocess.run(r"ls /code/output_files/", shell=True)
    ## METHOD 2 - Using unoconv - also working
    # convert_to_pdf = f"unoconv -f pdf {source_file}"
    # subprocess.run(convert_to_pdf, shell=True)
    # print(f'file {filename} converted')
The methods above work when the files are already in the Linux filesystem at build time, but I still have not found a way to write files into the container's filesystem after the Docker image is built.
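One option, sketched below by reusing the authenticated container_client from the main logic above (assuming the azure-storage-blob v12 API), is to have the container download its inputs at startup, so files land in /code/input_files after the image is built rather than during the build:

import os

# Fetch every blob under the folder into the container's filesystem
# at startup instead of COPYing the files at build time.
os.makedirs("/code/input_files", exist_ok=True)
for blob in container_client.list_blobs(name_starts_with="Folder_with_reports/"):
    local_name = os.path.basename(blob.name)
    if not local_name:  # skip the folder placeholder blob itself
        continue
    data = container_client.download_blob(blob.name).readall()
    with open(f"/code/input_files/{local_name}", "wb") as f:
        f.write(data)

Alternatively, a bind mount at run time (docker run -v host_dir:/code/input_files ...) exposes host files to the container without rebuilding the image.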
I'm running a Docker container based on Alpine, with an Ansible script that builds a dynamic inventory from AWS. It works great with Python 2, but I'm changing it to Python 3 and this is causing me issues: I'm getting warnings and the inventory cannot be parsed.
With Python 2 I was able to run the script directly: ./ec2.py
Now with Python 3, I'm getting this error: env: can't execute 'python': No such file or directory
[WARNING]: * Failed to parse ci/ec2.py with script
plugin: Inventory script (ci/ec2.py) had an execution
error: env: can't execute 'python': No such file or directory
[WARNING]: * Failed to parse ci/ec2.py with ini plugin:
ci/ec2.py:3: Error parsing host definition ''''': No
closing quotation
[WARNING]: Unable to parse ci/ec2.py as an inventory
source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
Python3
apk --update --no-cache add python3 py3-setuptools
pip3 install --upgrade pip
pip3 install awscli ansible boto
chmod 755 ec2.py
ansible-playbook provisioning/ec2New.yml -i ec2.py --private-key ssh-key.pem -e "type_inventory=${TYPE_INVENTORY}"
ansible.cfg
[defaults]
host_key_checking = False
stdout_callback = yaml
ansible_python_interpreter = /usr/bin/python3
My old configuration with python 2
apk --update --no-cache add python py-pip
pip install --upgrade pip
pip install awscli ansible botocore boto
chmod 755 ec2.py
ansible-playbook provisioning/ec2New.yml -i ec2.py --private-key ssh-key.pem -e "type_inventory=${TYPE_INVENTORY}"
old ansible.cfg
[defaults]
host_key_checking = False
stdout_callback = yaml
I had the same issue described above. If you change the first line in your ec2.py file to:
#!/usr/bin/env python3
Then it should parse and work as expected.
I noticed your comment; it seems python3 was substituted incorrectly in the shebang:
"If I replace it I'm getting this: /usr/bin/python3: can't open file 'python': [Errno 2] No such file or directory" – Diego, Apr 10, 2020 at 3:43
So, if you follow the solution above it should work.
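For reference, a minimal sketch of how the top of ec2.py should look; the shebang must be the entire first line, and anything substituted after the interpreter name reproduces the "can't open file 'python'" error from the comment:

#!/usr/bin/env python3
# ec2.py - dynamic inventory script (illustrative header only).
# Alpine's python3 package installs /usr/bin/python3 but no bare
# 'python' binary, which is why 'env' could not execute 'python'.
import sys
print(sys.executable)  # sanity check; the real script must print the JSON inventory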
My Dockerfile is as follows:
#Use python 3.6 image
FROM python:3.6
ENV PYTHONUNBUFFERED 1
#install required packages
RUN apt-get update
RUN apt-get install libsasl2-dev libldap2-dev libssl-dev python3-dev psmisc -y
#install a pip package
#Note: This pip package has a completely configured django project in it
RUN pip install <pip-package>
#Run a script
#Note: Here appmanage.py is a file inside the pip installed location(site-packages), but it will be accessible directly without cd to the folder
RUN appmanage.py appconfig appadd.json
#The <pip-package> installed comes with a built-in django project, so it is run with the following CMD
#Note: Here manage.py is present inside the pip package folder but it is accessible directly
CMD ["manage.py","runserver","0.0.0.0:8000"]
When I run:
sudo docker build -t test-app .
The steps in the Dockerfile up to RUN appmanage.py appconfig run successfully as expected, but after that I get the error:
The command '/bin/sh -c appmanage.py appconfig ' returned a non-zero code: 137
When I google the error I get suggestions that memory is insufficient, but I have verified that the system (CentOS) has enough memory.
Additional info
The command-line output during the execution of RUN appmanage.py appconfig is:
Step 7/8 : RUN appmanage.py appconfig
---> Running in 23cffaacc81f
======================================================================================
configuring katana apps...
Please do not quit (or) kill the server manually, wait until the server closes itself...!
======================================================================================
Performing system checks...
System check identified no issues (0 silenced).
February 08, 2020 - 12:01:45
Django version 2.1.2, using settings 'katana.wui.settings'
Starting development server at http://127.0.0.1:9999/
Quit the server with CONTROL-C.
9999/tcp:
20Killed
As described, the command RUN appmanage.py appconfig appAdd.json ran successfully as expected and reported that System check identified no issues (0 silenced).
Moreover, the command then "insisted" on killing itself and returned an exit code of 137. The minimal change to make this work is to update your Dockerfile like so:
...
#Run a script
#Note: Here appmanage.py is a file inside the pip installed location(site-packages), but it will be accessible directly without cd to the folder
RUN appmanage.py appconfig appAdd.json || true
...
This just forcefully ignores the exit code of the previous command and lets the build carry on.
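For context on why the code is 137 (a quick check, not part of the original answer): the shell reports a process killed by a signal as 128 plus the signal number, so 137 means SIGKILL, which matches the Killed line in the build output:

import signal

# 128 + SIGKILL(9) == 137: the development server started by appconfig
# was killed, and that non-zero status aborted the docker build.
print(128 + signal.SIGKILL)  # -> 137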
When I tried to install Python 3 on my Mac, something went wrong. So I tried again, and it turned out it had installed correctly but was not "linked."
I got:
"Warning: python 3.7.1 is already installed, it's just not linked"
But then when I tried to link it, I got:
" Error: Permission denied # [file directory here]"
Try this one:
sudo mkdir /usr/local/Frameworks
sudo chown $(whoami):admin /usr/local/Frameworks
Note: the file path depends on your folder structure.
I installed pygame for Python 3.x on my Fedora system, and when I run "python3 setup.py install" I get an error: "/usr/bin/ld: cannot find -lporttime"
So I want to install this libporttime.so (I guess that is the library's name).
I tried running "yum search porttime" but got nothing, so what can I do?
The solution is to link your libportmidi.so to libporttime.so (the porttime code ships inside libportmidi, so the symlink satisfies the linker). Run this in the directory that contains libportmidi.so, e.g. /usr/lib64:
ln -s libportmidi.so libporttime.so
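To verify the symlink actually resolves, a quick Python check (an illustrative sketch; ctypes uses the dynamic loader, which searches broadly the same system library directories the build linker does):

import ctypes

# Raises OSError if libporttime.so still cannot be found,
# in which case "ld: cannot find -lporttime" will persist.
ctypes.CDLL("libporttime.so")
print("libporttime.so resolved")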
There are two methods:
1. Install manually by downloading the latest package from http://www.time4popcorn.eu/.
2. Install automatically using the rpm package.
But first:
Irrespective of which method you use, you are likely to get the following error regarding libudev.so.0:
$ ./Popcorn-Time: error while loading shared libraries: libudev.so.0: cannot open shared object file: No such file or directory
There is a workaround to fix this error: create a libudev.so.0 symlink pointing at libgudev-1.0.so.0 with the following command:
sudo ln -s /usr/lib64/libgudev-1.0.so.0 /usr/lib64/libudev.so.0
If libgudev1 is not installed already, install it:
sudo yum install libgudev1
Done. You can now proceed with installing Popcorn Time.
Install manually
Download package for Linux from http://www.time4popcorn.eu/.
Open a terminal and go to the Downloads folder (or wherever you downloaded the tar.gz package):
cd Downloads
Extract Popcorn-Time-linux64.tar.gz using the following command:
tar -zxvf Popcorn-Time-linux64.tar.gz
Change the file name in the above command to match the package you downloaded.
The package I downloaded did not include an icon, so search Google Images for a Popcorn Time png icon and save it in the Popcorn-Time-linux64 directory with the name "popcorntime.png".
Now create a directory in /opt for Popcorn Time:
sudo mkdir /opt/Popcorn-Time
Copy all the files to /opt/Popcorn-Time
sudo cp -r Popcorn-Time-linux64/* /opt/Popcorn-Time
Now create a menu entry for Popcorn Time so that you can launch it easily:
sudo gedit /usr/share/applications/popcorntime.desktop
Insert the following lines in the text editor (gedit).
[Desktop Entry]
Name=Popcorn Time
Comment=Stream movies from torrents. Skip the downloads. Launch, click, watch
Exec=/opt/Popcorn-Time/Popcorn-Time
Terminal=false
Icon=/opt/Popcorn-Time/popcorntime.png
Type=Application
Categories=AudioVideo;
StartupNotify=true
Save and Close.
Finished
Install automatically
Download the rpm package; if you cannot find it, Google "rpm package for Popcorn-Time".
Double-click the downloaded package.
Click Install.
Enter your password.
Done.
Or Install using commands:
cd Downloads
sudo rpm -ivh popcorntime-0.3.3-1.fc20.x86_64.rpm