I have a program which uses Yagmail and the keyring package to safely store email credentials. When I run this script from Atom and IDLE it works.
However, after I packaged it with PyInstaller it gives me this message:
RuntimeError: No recommended backend was available. Install a recommended 3rd party backend package; or, install the keyrings.alt package if you want to use the non-recommended backends. See https://pypi.org/project/keyring for details.
In my program I have
import keyring
I have also installed keyrings.alt.
Since I can't add comments, I am adding my input as an answer. Hope this helps.
I also had a similar issue, where I used the keyring module to store a password in my Python script and packaged it with PyInstaller. The script ran perfectly when I ran it directly, but when I tried to run the packaged exe I got the same error:
"RuntimeError: No recommended backend was available. Install a recommended 3rd party backend package; or, install the keyrings.alt package if you want to use the non-recommended backends. See https://pypi.org/project/keyring for details."
I googled this error and found the link below (it may not be directly related, but someone there gave a workaround). I added the workaround as suggested in the link (you also have to find out which keyring backend you are using) and it worked.
Link: https://github.com/jaraco/keyring/issues/359
Code to find which keyring backend you are using:
from keyring import get_keyring
print(get_keyring())  # shows which backend keyring has selected
As suggested in the link above, you can add that block somewhere in your script and the exe file will then run perfectly.
Here's what I did, based on @Rena76's answer:
To get the default 'method' used to store the password, I imported get_keyring from keyring and called it.
from keyring import get_keyring
print("Keyring method: " + str(get_keyring()))
The reported method was 'keyring.backends.chainer.ChainerBackend', which works fine in the script but not when packaged into an .exe file. So I set 'keyring.backends.Windows.WinVaultKeyring' as my method, given that I'm using Windows.
keyring.core.set_keyring(keyring.core.load_keyring('keyring.backends.Windows.WinVaultKeyring'))
Finally, so that I'm able to save the credentials in the Windows Vault, I import the win32 libraries (this also ensures PyInstaller bundles them):
import win32api, win32, win32timezone
Now I can successfully perform Keyring functions, such as:
keyring.set_password(service_name='<service>', username='<username>', password='<password>')
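Putting the pieces together, here is a minimal sketch assuming Windows with pywin32 installed; the service name, username, and password are placeholders:
# Minimal sketch, assuming Windows + pywin32; service/username/password are placeholders.
import keyring
import keyring.backends.Windows
import win32timezone  # imported explicitly so PyInstaller bundles the module WinVaultKeyring relies on

# Force the Windows Credential Manager backend instead of the default chainer.
keyring.set_keyring(keyring.backends.Windows.WinVaultKeyring())

keyring.set_password("my_email_service", "me@example.com", "my-app-password")
print(keyring.get_password("my_email_service", "me@example.com"))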
I am trying to get a clear concept of how to get the Erwin-generated DDL objects with Python. I am aware the Erwin API needs to be used. What I am looking for is which Python module and which API need to be used, and how to use them. I would be thankful for an example!
Here is a start:
import win32com.client
ERwin = win32com.client.Dispatch("erwin9.SCAPI")
I haven't been able to browse the SCAPI DLL, so what I know is from trial and error. Erwin publishes VB code that works, but it is not straightforward to convert.
Install pywin32 (run the commands below from the Scripts folder, e.g. C:\Program Files\Python37\Scripts):
python -m pip install pywin32
python pywin32_postinstall.py -install
Sample script to extract DDL using Erwin's Forward Engineer functionality (change paths accordingly):
import win32com.client
# connect to Erwin's scripting API (SCAPI) via COM
api = win32com.client.Dispatch("erwin9.SCAPI")
# open the model file as a persistence unit
unit = api.PersistenceUnits.Add("c:/models/data_model.erwin", "RDO=Yes")
# forward engineer the model into a DDL script
unit.FEModel_DDL("c:/scripts/ddl_script.sql")
For the above to work, the Erwin application probably needs to be running.
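If you want a friendlier failure when the SCAPI COM server isn't available, here is a small hedged sketch that wraps the dispatch call from above:
# Hedged sketch: fail with a readable message if Erwin's COM server can't be reached.
import win32com.client
import pywintypes

try:
    api = win32com.client.Dispatch("erwin9.SCAPI")
except pywintypes.com_error as exc:
    raise SystemExit("Could not connect to Erwin's SCAPI COM server: %s" % exc)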
Pip has a configuration file which is typically in ~/.pip/pip.conf on Linux, %APPDATA%\pip\pip.ini on Windows, and possibly in other locations inside virtual environments.
I could write some code to locate Pip's config file and then parse it with the ini-file parser included with Python; however, it occurs to me that this code must already exist within Pip. Pip surely must have a mechanism to locate and parse its own configuration file.
I'd like to be able to access that configuration via Pip's API. In particular I'm trying to get hold of the index URL that Pip is using (along with any credentials which may be embedded). That will allow my service to guarantee that it hits the same repository that Pip installed from.
Is there an easy way to access this information?
The objective here is to access Pip's configuration information without having to re-implement the code which searches for Pip's config file.
There's no good way to do this, but this gets you somewhat directly to the user's config file. This won't work for site-configs.
# note: this relies on pip's internal layout at the time; newer pip moved these modules under pip._internal
import os
import pip.appdirs
import pip.locations
print(os.path.join(pip.appdirs.user_config_dir("pip"), pip.locations.config_basename))
And a way that gets all the config file locations:
>>> from pip.baseparser import ConfigOptionParser
>>> ConfigOptionParser(name="foo").get_config_files()
['C:\\ProgramData\\pip\\pip.ini', 'C:\\Users\\salimfadhley\\pip\\pip.ini', 'C:\\Users\\salimfadhley\\AppData\\Roaming\\pip\\pip.ini']
>>>
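On newer pip (10 and later) those modules were reorganized under pip._internal, so a more future-proof sketch is to shell out to the supported pip config subcommand and parse its key=value output; the pip_config helper name here is just illustrative:
# Hedged sketch: query pip's effective configuration via the "pip config" subcommand (pip >= 10).
import subprocess
import sys

def pip_config():
    """Return pip's effective configuration as a dict of 'section.key' -> value."""
    out = subprocess.run(
        [sys.executable, "-m", "pip", "config", "list"],
        capture_output=True, text=True, check=True,
    ).stdout
    config = {}
    for line in out.splitlines():
        key, sep, value = line.partition("=")
        if sep:
            config[key.strip()] = value.strip().strip("'")
    return config

# e.g. the index URL (with any embedded credentials), if one is configured
print(pip_config().get("global.index-url"))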
I'm writing a simple IronWorker in Python to do some work with the AWS API.
To do so I want to use the boto library, which is distributed via the PyPI repository. The boto library is not installed by default in the IronWorker runtime environment.
How can I bundle the boto library dependency with my IronWorker code?
Ideally I'm hoping I can use something like the gem dependency bundling available for Ruby IronWorkers - i.e. in myRuby.worker specify:
gemfile '../Gemfile', 'common', 'worker' # merges gems from common and worker groups
In the Python Loggly sample, I see that the hoover library is used:
# here we have to include the hoover library with the worker
# (worker_dir is the worker package directory set up earlier in the sample)
import os, shutil, hoover
hoover_dir = os.path.dirname(hoover.__file__)
shutil.copytree(hoover_dir, worker_dir + '/loggly')  # copy it to the worker directory
However, I can't see where/how you specify which hoover library version you want, or where to download it from.
What is the official/correct way to use 3rd party libraries in Python IronWorkers?
Newer iron_worker versions have native support for the pip command.
So, you need:
runtime "python"
exec "something.py"
pip "boto"
pip "someotherpip"
full_remote_build true
[edit]We've worked on our toolset a bit since this answer was written and accepted. The answer from my colleague below is the recommended course moving forward.[/edit]
I wrote the Python client library for IronWorker. I'm also employed by Iron.io.
If you're using the Python client library, the easiest (and recommended) way to do this is to just copy over the library's installed folder, and include it when uploading the package. That's what the Python Loggly sample is doing above. As you said, that doesn't specify a version or where to download the library from, because it doesn't care. It just takes the one installed on your system and uses it. Whatever you get when you enter "import boto" on your local machine is what would be uploaded.
The other option is using our CLI to upload your worker, with a .worker file.
To do this, here's what you'd need to do:
Create a botoworker.worker file:
runtime "binary"
build 'pip install --install-option="--prefix=`pwd`/pips" boto'
file 'botoworker.py'
exec "botoworker.sh"
That second line is the pip command that will be run to install the dependency. You can modify it like you would any pip command run from the command line. It's going to execute that command on the worker during the "build" phase, so it's only executed once instead of every time you run a task.
The third line should be changed to the Python file you want to run--it's your Python worker file. Here's the one we used to test this:
import boto
If you save that as botoworker.py, the above should work without any modification. :)
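If you want the test worker to actually exercise boto, here is a slightly fuller hedged sketch (boto 2 style; it assumes AWS credentials are made available to the worker, e.g. via environment variables):
# botoworker.py - hedged sketch; assumes AWS credentials are provided to the worker.
import boto

conn = boto.connect_s3()  # picks up AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment
for bucket in conn.get_all_buckets():
    print(bucket.name)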
The fourth line is a shell script that's going to actually run your worker. I've included the one we used below. Just save it as botoworker.sh, and you won't have to worry about modifying the .worker file above.
PYTHONPATH="$HOME/pips/lib/python2.7/site-packages:$PYTHONPATH" python botoworker.py "$@"
You'll notice it refers to your Python file--if you don't name your Python file botoworker.py, remember to change it here, too. All this does is set your PYTHONPATH to include the installed library, and then runs your Python file.
To upload this, just make sure you have the CLI installed (gem install iron_worker_ng, making sure your Ruby version is 1.9.3 or higher) and then run "iron_worker upload botoworker" in your shell, from the same directory your botoworker.worker file is in.
Hope this helps!
I'm trying to install and configure pyIpopt. Ipopt is already installed and the examples run fine.
From the shell, when I do import pyIpopt, I get the error:
ImportError: /***PATH***/libipopt.so.1: undefined symbol: MPI_Init
The FAQ section of the pyIpopt git project has this to offer for these kinds of errors:
Do a Google search to find the library file, and add
-lWhateverLibrary in the makefile of pyipopt.
I've googled and found this: http://www.mcs.anl.gov/research/projects/mpi/www/www3/MPI_Init.html.
I don't know how to get the library or add it to the makefile... Any assistance would be much appreciated!
Just had a similar problem on Ubuntu.
Using libmumps-seq worked for me:
installed libmumps-seq-4.9.2 (just with apt-get, alongside the ordinary libmumps)
in setup.py, changed 'coinmumps' to 'dmumps_seq-4.9.2' in the libraries list argument (see the sketch below)
rebuilt and installed.
If I understand it correctly, the default MUMPS is distributed (using the MPI lib, which can be a world of pain), and all I needed is the sequential one, which mumps-seq provides.
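For reference, the edit to pyipopt's setup.py described in the second step would look roughly like this; everything except the 'coinmumps' to 'dmumps_seq-4.9.2' swap is illustrative and may differ in your copy:
# setup.py (excerpt) - illustrative sketch of the libraries-list change
libraries = [
    'ipopt',
    # 'coinmumps',        # default: the distributed, MPI-based MUMPS
    'dmumps_seq-4.9.2',    # the sequential MUMPS provided by libmumps-seq
]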
How do I get the hgcr_ui module in RhodeCode? I ran it on my Windows box and I get an error like this:
failed to import extension hgcr-gui-qt: No module named hgcr_ui
However, I can't access my repository either. I have downloaded https://bitbucket.org/glimchb/hgcr-gui too, but I still get the same error.
RhodeCode uses its internal equivalent of .hgrc files in the database. The rhodecode_ui table has one extension available (largefiles); you could add a similar row with hgcr_ui to that table manually, and that extension should then work with RhodeCode.
Make sure your Mercurial install is up to date. RhodeCode recommends running in its own Python sandbox, and that sandbox may have a different Mercurial version.