VCRedist dependencies with PyInstaller - python

I'm trying to release a simple app built with PyInstaller that works on Windows 7 x64 without any dependencies, but I ran into a problem with the Microsoft Visual C++ Redistributable (VCRedist). On my host PC, I installed VCRedist 2015 and generated the executable normally (not standalone). The VCRedist DLL files (api-ms-win*.dll) were included in the output directory as expected, and the app worked fine on the target machine without VCRedist installed. Then I tried to generate a standalone app, but this time when I executed it on the target machine I got this error:
The procedure entry point ucrtbase_putch could not be located in the dynamic link library api-ms-win-crt-conio-l1-1-0.dll.
I checked the generated temp folder (_MEI*) and found that the required DLLs are right there, yet somehow the executable cannot use them. I made a copy of the (_MEI*) folder, put the standalone executable next to it, and surprisingly it worked. It seems that some of those DLLs also exist in the Windows directory of the target machine, and the executable tries to load those instead of the ones in the (_MEI*) directory.
I also read the docs, but they didn't help much with this.

Related

Creating an executable kivy app and exe installer

I created a data-miner GUI for Twitter with Kivy and am currently having a lot of trouble turning it into an exe. I tried following this video and imported glew and sdl2 into my spec file, but after running pyinstaller main.spec, my executable still would not open.
Is it because my program has more than one file and folder (here is the link to the GitHub repo for my project)? If so, how do you deal with that?
In addition, if I manage to successfully create a working exe, how do I create an exe installer that other people can use to install the executable?
Making an executable from a complex script like yours can be quite frustrating because of its dependencies, but here is a brief guide to what you need to do to achieve your goal.
Create your main.spec file with console mode enabled so you can see the exact error messages from the app (make sure to remove --noconsole from the PyInstaller command, or set console=True in the spec file). Also pass --noupx in the build command to disable compression of the output, since UPX-compressed DLLs can cause issues.
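For reference, the relevant knobs in the spec file look roughly like this (a sketch only; pyz and a come from the rest of the generated spec, and the name is a placeholder):

```python
# main.spec (fragment): console=True keeps the console window open so you
# can read tracebacks; upx=False disables UPX compression of bundled DLLs.
exe = EXE(
    pyz,
    a.scripts,
    a.binaries,
    a.datas,
    name='main',
    console=True,
    upx=False,
)
```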
You need to make sure that every external module you used can be packed correctly. I don't think you'll have any problem with either Kivy or Tweepy, but if you get a missing-import error, look up the solution for each module by searching for the pattern [module] pyinstaller.
Your app has external resources (images, data files, etc.) which must be added to the packed executable and loaded properly. I wrote an answer about this here.
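The usual pattern for locating bundled resources at runtime is to check for PyInstaller's unpack directory (a minimal sketch; the file names are placeholders):

```python
import os
import sys

def resource_path(relative_path):
    # When frozen by PyInstaller, bundled data files are unpacked to a
    # temporary directory exposed as sys._MEIPASS; during a normal run
    # we fall back to the current working directory.
    base = getattr(sys, '_MEIPASS', os.path.abspath('.'))
    return os.path.join(base, relative_path)
```

You would then write e.g. Image(source=resource_path('images/logo.png')) in Kivy, assuming the file was added through the datas list of the spec.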
If you want a standalone executable, pass -F to the PyInstaller command; this one-file mode is more robust than using an installer to gather the files of a one-directory build.

How to link to .dlls in conda environment using a Python Script running a Windows .exe?

I am using a script populating the Structure-from-Motion software COLMAP with custom features.
This script is usually distributed for linux and I had to do some adaptions in Windows 10.
The script is calling COLMAP via:
cmd = [
    str(colmap_path), 'feature_importer',
    '--database_path', str(database_path),
    '--image_path', str(image_dir),
    '--import_path', str(dummy_dir),
    '--ImageReader.single_camera', str(int(single_camera)),
]
ret = subprocess.call(cmd)
if ret != 0:
    logging.warning('Problem with feature_importer, exiting.')
    exit(ret)
For colmap_path I pointed to colmap.exe, and it gets executed but cannot find the .dlls that are stored in a separate folder. The structure of the program is as follows:
C:/COLMAP/bin/colmap.exe (and other *.exe)
C:/COLMAP/lib/*.dll
C:/COLMAP/lib/platforms/qwindows.dll
My attempt was to just copy the .dll files into /anaconda3/envs/my_env but then I am getting the error:
qt.qpa.plugin: Could not find the Qt platform plugin "windows" in ""
This application failed to start because no Qt platform plugin could
be initialized. Reinstalling the application may fix this problem.
So is it possible to directly link to the .dlls and platforms in ret = subprocess.call(cmd)?
I found the solution on a different website: copy the platforms folder containing qwindows.dll into the bin folder. This works for many different applications launched from a .exe; subprocess.call was then able to find the Qt platform plugin and the script ran without errors.
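If you'd rather not modify the COLMAP installation, you can also point Qt at the plugin folder through the child process's environment (a sketch; the C:/COLMAP layout is taken from the question, and QT_QPA_PLATFORM_PLUGIN_PATH is the variable Qt consults for platform plugins):

```python
import os
import subprocess  # used when launching colmap.exe, as in the question

def build_colmap_env(colmap_root):
    # Layout from the question: <root>/bin/*.exe, <root>/lib/*.dll,
    # <root>/lib/platforms/qwindows.dll.
    env = os.environ.copy()
    lib_dir = os.path.join(colmap_root, 'lib')
    # Tell Qt where to find the "windows" platform plugin (qwindows.dll).
    env['QT_QPA_PLATFORM_PLUGIN_PATH'] = os.path.join(lib_dir, 'platforms')
    # Prepend lib/ so the loader can resolve the remaining DLLs.
    env['PATH'] = lib_dir + os.pathsep + env.get('PATH', '')
    return env

# ret = subprocess.call(cmd, env=build_colmap_env('C:/COLMAP'))
```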

Issues with pyinstaller and shared library (dll) - The specified module could not be found

I have a Python script which I want to convert to a standalone exe file, so I am using PyInstaller. I have two DLL files, written in C and C++, which are used in the Python script via ctypes.
global image_lib
image_lib = ctypes.CDLL('image_lib.dll')
global host_lib
host_lib = ctypes.CDLL('host_lib.dll')
Both files are in the current directory and the Python script runs fine. When I convert the script to an exe with PyInstaller and put the DLLs in the same folder, it works fine on my computer. But if I copy that exe and the DLLs to a different computer, only image_lib.dll is found; for some reason the exe cannot find host_lib.dll, which is in the same directory as well.
I also tried printing the path where it looks for the DLLs, using application_path = os.getcwd(); it is the current directory, yet the exe is still unable to find the DLL there.
I tried adding host_lib.dll as a hidden import while generating the exe with PyInstaller, but no luck.
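For what it's worth, hidden imports only affect Python modules, not DLLs loaded through ctypes, so that option won't change how host_lib.dll is found. One thing worth trying (a sketch, assuming the DLLs are bundled next to the exe) is loading by absolute path, so the lookup doesn't depend on the current working directory:

```python
import ctypes
import os
import sys

def load_bundled_dll(name):
    # In a frozen app, bundled binaries live in sys._MEIPASS (one-file
    # mode) or next to the executable (one-folder mode); otherwise look
    # in the current directory, as the original script does.
    if getattr(sys, 'frozen', False):
        base = getattr(sys, '_MEIPASS', os.path.dirname(sys.executable))
    else:
        base = os.path.abspath('.')
    return ctypes.CDLL(os.path.join(base, name))
```

Note also that "The specified module could not be found" can refer to a dependency of host_lib.dll itself (a VC++ runtime DLL, for instance) that is missing on the other machine, even though host_lib.dll is present; a tool like Dependency Walker can confirm that.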
I don't know what the problem could be, since the issue is with just one DLL and not the other. I'd really appreciate any help with this.
Thanks a lot!!

Bundle pdflatex to run on an AWS Lambda with a custom AMI image

My goal is to create an AWS Lambda function that compiles .tex files into .pdf using the pdflatex tool through Python.
I've built an EC2 instance using Amazon's AMI and installed pdflatex using yum:
yum install texlive-collection-latex.noarch
This way, I can use the pdflatex and my python code works, compiling my .tex into a .pdf the way I want.
Now, I need to create a .zip file bundle containing the pdflatex tool; latexcodec (a python library I've used, no problem with this one); and my python files: handler (lambda function handler) and worker (which compiles my .tex file).
This bundle is the deployment package needed to upload my code and libraries to Amazon Lambda.
The problem is: pdflatex has a lot of dependencies, and I'd have to gather everything in one place. I've found a script which does that for me:
http://www.metashock.de/2012/11/export-binary-with-lib-dependencies/
I've set my PATH to find the pdflatex binary at the new directory so I can use it and I had an issue: pdflatex couldn't find some dependencies. I was able to fix it by setting an environment variable to the folder where the script moved everything to:
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/ec2-user/lambda/local/lib64:/home/ec2-user/lambda/local/usr/lib64"
At this point, I was running pdflatex directly, through bash. But my Python script raised an error when trying to use pdflatex:
mktexfmt: No such file or directory
I can't find the format file `pdflatex.fmt'!
I was also able to solve this by moving the pdflatex.fmt and texmf.cnf files to my bundle folder and setting some environment variables as well:
export TEXFORMATS=/home/ec2-user/lambda/local/usr/bin
And now, my current problem, the python script keeps throwing the following error:
---! /home/ec2-user/lambda/local/usr/bin/pdflatex.fmt doesn't match pdftex.pool
(Fatal format file error; I'm stymied)
I've found some possible solutions: deleting a .texmf-var folder (which, in my case, does not exist) and using fmtutil (which I don't have in my AMI image).
1 - Was I missing any environment variable?
2 - Or did I move my pdflatex binary and all its dependencies the wrong way?
3 - Is there a correct way to move a binary and all its dependencies so it can be used on another machine (considering the env variables)?
The Lambda environment is a container, not a common EC2 instance. All files in your .zip are deployed to /var/task/ inside the container. Moreover, everything is mounted read-only except the /tmp directory, so it's impossible to run yum, for example.
For your case, I'd recommend putting the binaries in your zip and invoking them as /var/task/<binary name>. Remember to use a statically compiled binary built on a Linux compatible with the container's kernel.
samoconnor does pretty much exactly what you want in https://github.com/samoconnor/lambdalatex. Note that he sets environment variables in his handler function:
os.environ['PATH'] += ":/var/task/texlive/2017/bin/x86_64-linux/"
os.environ['HOME'] = "/tmp/latex/"
os.environ['PERL5LIB'] = "/var/task/texlive/2017/tlpkg/TeXLive/"
That might do the trick for you as well.
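Put together, a handler can build that environment once and hand it to subprocess (a sketch; the texlive path follows samoconnor's layout, and only /tmp is writable inside the Lambda container):

```python
import os

def build_latex_env(texlive_root='/var/task/texlive/2017'):
    # Environment for a pdflatex bundled under /var/task (read-only);
    # HOME must point into /tmp, the only writable directory in Lambda.
    env = os.environ.copy()
    env['PATH'] = env.get('PATH', '') + os.pathsep + os.path.join(
        texlive_root, 'bin', 'x86_64-linux')
    env['HOME'] = '/tmp/latex/'
    env['PERL5LIB'] = os.path.join(texlive_root, 'tlpkg', 'TeXLive')
    return env

# ret = subprocess.call(['pdflatex', 'doc.tex'],
#                       env=build_latex_env(), cwd='/tmp')
```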

Sharing a common dll/pyd dependencies for several exe in Python

I am successfully using py2exe to produce a Windows binary distribution of several independent Python scripts (using the standard Python27.dll plus PyQt and Matplotlib pyd files), producing an executable for each one (app1.exe, app2.exe, ...).
Each of these executables works as expected if kept in the same directory as its dynamic dependencies (the dependencies are exactly the same for each).
I tried to create a common directory, say CommonDynamicLibrariesPython, containing all these dll/pyd dependencies, then placed each application in a different location and attempted to run it through a batch file after locally prepending the dll/pyd directory to the PATH system variable, using something like:
:: run_app1.bat placed in c:\dir_app1
set PYTHONDLL=my_path_to_CommonDynamicLibrariesPython
set PATH=%PYTHONDLL%;%PATH%
start /B app1.exe
Alternatively, I also tried changing PYTHONPATH from the batch file and from the Python script itself, but no luck.
Can anyone tell me why this isn't possible?
In other words, is it possible to run a Python executable produced by py2exe when it is placed outside its original dist folder, without using shortcuts, but using a batch file?
