mypy - erroring when there are no Python files - python

Is there a way to stop mypy erroring when it doesn't find any .py files? In essence, this isn't an error. What I'm looking to do is hide or silence this one error.
$ poetry run mypy .
There are no .py[i] files in directory '.'
Error: Process completed with exit code 2.
Thanks,

From mypy's source it doesn't look like there is any way to do this from mypy directly. The function that reports the error does provide an argument for ignoring it, but the argument is only passed from within tests: https://github.com/python/mypy/blob/99f4d5af147d364eda1d4b99e79770c171896f13/mypy/find_sources.py#L20-L47.
Without ignoring other errors, as far as I can tell you'd have to submit each path to mypy individually (if you're passing it multiple paths) and ignore exit code 2 only when it reports the error you want to suppress.
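A rough, untested sketch of that approach (the wrapper script and its names are mine, not part of mypy):
import subprocess
import sys

def run_mypy(paths):
    """Run mypy once per path, tolerating only the empty-directory error."""
    worst = 0
    for path in paths:
        proc = subprocess.run(['mypy', path], capture_output=True, text=True)
        output = proc.stdout + proc.stderr
        if proc.returncode == 2 and 'There are no .py[i] files' in output:
            continue  # empty directory: treat as success
        sys.stdout.write(proc.stdout)
        sys.stderr.write(proc.stderr)
        worst = max(worst, proc.returncode)
    return worst

if __name__ == '__main__':
    sys.exit(run_mypy(sys.argv[1:]))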
Patching mypy would be another option, but mypy is precompiled through mypyc, so with monkey-patching you'd have to get rid of the binaries and lose out on a bunch of performance, or you'd have to build it yourself with the function changed and install the patched module.

You can install mypy without the binaries, but it's going to be slower.
python3 -m pip install --no-binary mypy -U mypy
Once this is installed, you can create a patch script, say mypy-custom.py, like so:
import sys
from typing import List, Optional, Sequence
import mypy.main
from mypy.modulefinder import BuildSource
from mypy.fscache import FileSystemCache
from mypy.options import Options
orig_create_source_list = mypy.main.create_source_list
def create_source_list(paths: Sequence[str], options: Options,
                       fscache: Optional[FileSystemCache] = None,
                       allow_empty_dir: bool = True) -> List[BuildSource]:
    return orig_create_source_list(paths, options, fscache, allow_empty_dir)
mypy.main.create_source_list = create_source_list
mypy.main.main(None, sys.stdout, sys.stderr)
The patched version should be called with:
python mypy-custom.py [<options>] <path>
If you pass a folder without any .py files, you should get an exit code of 0 with the following output:
Nothing to do?!
Success: no issues found in 0 source files
As suggested by @Numerlor, the patch can also be written like so:
import sys
from functools import partial
import mypy.find_sources
from mypy import api
orig_create_source_list = mypy.find_sources.create_source_list
create_source_list = partial(orig_create_source_list, allow_empty_dir=True)
mypy.find_sources.create_source_list = create_source_list
result = api.run(sys.argv[1:])
if result[0]:
    print(result[0])
if result[1]:
    print(result[1])
exit(result[2])
With this patch, the allow_empty_dir=False option becomes unavailable to the rest of the module, but it doesn't seem to break anything... yet!
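If that side effect worries you, a variant that only changes the default (instead of pinning the value with partial) keeps the keyword usable elsewhere. A minimal sketch, untested:
import mypy.find_sources

orig_create_source_list = mypy.find_sources.create_source_list

def create_source_list(*args, allow_empty_dir=True, **kwargs):
    # Same effect as the partial() version, except callers can still
    # pass allow_empty_dir=False explicitly if they need to.
    return orig_create_source_list(*args, allow_empty_dir=allow_empty_dir, **kwargs)

mypy.find_sources.create_source_list = create_source_list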

The patching method has the major inconvenience of negating the benefit of the compiled libraries.
To avoid this, you could use a bash script that checks for the presence of .py files in the directory tree before calling mypy:
#! /bin/bash
if [ -n "$(find "$1" -type f -name '*.py')" ]; then
    mypy "$@"
fi
With this, you would have to pass the directory as the first argument, then the options.
If you want the folder to be the last argument, you can substitute "$1" with "${*: -1}" in the find command.

Related

call an executable string from python

I'm trying to find a way to run an executable script that can be downloaded from the web from Python, without saving it as a file. The script can be python code or bash or whatever - it should execute appropriately based on the shebang. I.e. if the following were saved in a file called script, then I want something that will run ./script without needing to save the file:
#!/usr/bin/env python3
import sys
from my_module import *
scoped_hash = sys.argv[1]
print(scoped_hash)
I have a function that reads such a file from the web and attempts to execute it:
import os

def execute_artifact(command_string):
    os.system('sh | ' + command_string)
Here's what happens when I call it:
>>> print(string)
'#!/usr/bin/env python3\nimport sys\nfrom my_module import *\n\nscoped_hash = sys.argv[1]\n\nobject_string = read_artifact(scoped_hash)\nparsed_object = parse_object(object_string)\nprint(parsed_object)\n'
>>> execute_artifact(string)
sh-3.2$ Version: ImageMagick 7.0.10-57 Q16 x86_64 2021-01-10 https://imagemagick.org
Copyright: © 1999-2021 ImageMagick Studio LLC
License: https://imagemagick.org/script/license.php
Features: Cipher DPC HDRI Modules OpenMP(4.5)
Delegates (built-in): bzlib freetype gslib heic jng jp2 jpeg lcms lqr ltdl lzma openexr png ps tiff webp xml zlib
Usage: import [options ...] [ file ]
Bizarrely, ImageMagick is called. I'm not sure what's going on, but I'm sure there's a better way to do this. Can anyone help me?
EDIT: This answer was added before OP updated requirements to include:
The script can be python code or bash or whatever - it should execute appropriately based on the shebang.
Some may still find the below helpful if they decide to try to parse the shebang themselves:
Probably, the sanest way to do this is to pass the string to the python interpreter as standard input:
import subprocess
p = subprocess.Popen(["python"], stdin=subprocess.PIPE)
p.communicate(command_string.encode())
My instinct tells me this entire thing is fraught with pitfalls. Perhaps, at least, you want to launch it using the same executable that launched your current process, so:
import subprocess
import sys
p = subprocess.Popen([sys.executable], stdin=subprocess.PIPE)
p.communicate(command_string.encode())
If you want to use arguments, passing the code in as a string with the -c option works, and the remaining arguments are then available to the script, so:
import subprocess
import sys
command_string = """
import sys
print(f"{sys.argv=}")
"""
completed_process = subprocess.run([sys.executable, "-c", command_string, "foo", "bar", "baz"])
The above prints:
sys.argv=['-c', 'foo', 'bar', 'baz']
This cannot be done in full generality.
If you want the shebang line to be interpreted as usual, you must write the script to a file. This is a hard requirement of the protocol that makes shebangs work. When a script with a shebang line is executed by the operating system, the kernel (and yes, it’s not the shell which does it, unlike what the question implies) reads the script and invokes the interpreter specified in the shebang, passing the pathname of the script as a command line argument. For that mechanism to work, the script must exist in the file system where the interpreter can find it. (It’s a rather fragile design, leading to some security issues, but it is what it is.)
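For illustration, here is a minimal sketch of that file-based approach on a POSIX system (it reuses the question's execute_artifact name; error handling kept minimal):
import os
import subprocess
import tempfile

def execute_artifact(script_text, *args):
    # Write the script to a real file so the kernel can read the shebang.
    with tempfile.NamedTemporaryFile('w', delete=False) as f:
        f.write(script_text)
        path = f.name
    try:
        os.chmod(path, 0o700)  # mark it executable for the owner
        return subprocess.call([path, *args])  # the kernel honors the shebang
    finally:
        os.unlink(path)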
Many interpreters will allow you to specify the program text on standard input or on the command line, but it is nowhere guaranteed that it will work for any interpreter. If you know you are working with an interpreter which can do it, you can simply try to parse the shebang line yourself and invoke the interpreter manually:
import io
import subprocess
import re
_RE_SHBANG = re.compile(br'^#!\s*(\S+)(?:\s+(.*))?\s*\n$')
def execute(script_body):
    stream = io.BytesIO(script_body)
    shebang = stream.readline()
    m = _RE_SHBANG.match(shebang)
    if not m:
        # not a shebang
        raise ValueError(shebang)
    interp, arg = m.groups()
    arg = (arg,) if arg is not None else ()
    return subprocess.call([interp, *arg, '-c', script_body])
The above will work for POSIX shell and Python scripts, but not e.g. for Perl, node.js or standalone Lua scripts, as the respective interpreters take the -e option instead of -c (and the latter doesn’t even ignore shebangs in code given on the command line, so that needs to be separately stripped too). Feeding the script to the interpreter through standard input is also possible, but considerably more involved, and will prevent the script itself from using the standard input stream. That is also possible to overcome, but it doesn’t change the fact that it’s just a makeshift workaround that isn’t anywhere guaranteed to work in the first place. Better to simply write the script to a file anyway.

Start Python REPL and execute command [duplicate]

I would like to play around in the python interpreter but with a bunch of imports and object setup completed. Right now I'm launching the interpreter on the command line and doing the setup work every time. Is there any way to launch the command line interpreter with all the initialization work done?
Ex:
# Done automatically.
import foo
import baz
l = [1,2,3,4]
# Launch the interpreter.
launch_interpreter()
>> print l
>> [1,2,3,4]
You can create a script with the code you wish to run automatically, then use python -i to run it. For example, create a script (let's call it script.py) with this:
import foo
import baz
l = [1,2,3,4]
Then run the script
$ python -i script.py
>>> print l
[1, 2, 3, 4]
After the script has completed running, python leaves you in an interactive session with the results of the script still around.
If you really want some things done every time you run python, you can set the environment variable PYTHONSTARTUP to a script which will be run every time you start python. See the documentation on the interactive startup file.
I use PYTHONSTARTUP.
My .bash_profile points it at a .pyrc file in my home folder, which has the import statements in it.
https://docs.python.org/3/using/cmdline.html#envvar-PYTHONSTARTUP
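For instance, a hypothetical ~/.pyrc (the file name is arbitrary; point the variable at it with export PYTHONSTARTUP="$HOME/.pyrc" in your shell profile):
# ~/.pyrc -- runs at the start of every interactive Python session.
import json
import os
import sys

l = [1, 2, 3, 4]  # any setup objects you want available in the REPL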
I came across this question when trying to configure a new desk for my research and found that the answers above didn't quite suit my desire: to contain the entire desk configuration within one file (meaning I wouldn't create a separate script.py as suggested by @srgerg).
This is how I ended up achieving my goal:
export PYTHONPATH=$READ_GEN_PATH:$PYTHONPATH
alias prepy="python3 -i -c \"
from naive_short_read_gen import ReadGen
from neblue import neblue\""
In this case neblue is in the CWD (so no path extension is required there), whereas naive_short_read_gen is in an arbitrary directory on my system, which is specified via $READ_GEN_PATH.
You could do this in a single line if necessary: alias prepy=PYTHONPATH=$EXTRA_PATH:$PYTHONPATH python3 -i -c ....
You can use the -s option while starting the command line. The details are given in the documentation here
I think I know what you want to do. You might want to check IPython, because you cannot start the python interpreter without giving the -i option (at least not directly).
This is what I did in my project:
def ipShell():
    '''Starts the interactive IPython shell'''
    import IPython
    from IPython.config.loader import Config
    cfg = Config()
    cfg.TerminalInteractiveShell.confirm_exit = False
    IPython.embed(config=cfg, display_banner=False)

# Then add the following line to start the shell
ipShell()
You need to be careful, though, because the shell will have the namespace of the module in which ipShell() is defined. If you put the definition in the file you run, then you will be able to access the globals() you want. There could be other workarounds to inject the namespace you want; the edit below shows one that needs no extra arguments.
EDIT
The following function defaults to the caller's namespace (__main__.__dict__).
def ipShell():
    '''Starts the interactive IPython shell
    with the namespace __main__.__dict__'''
    import IPython
    from __main__ import __dict__ as ns
    from IPython.config.loader import Config
    cfg = Config()
    cfg.TerminalInteractiveShell.confirm_exit = False
    IPython.embed(config=cfg, user_ns=ns, display_banner=False)
without any extra arguments.

Scons ignores the environment for dependency tracking when using python function builders

I have an issue in scons 2.5.1 related to passing parameters through the environment to python based builders.
When an ordinary builder is called it seems like the result is flagged as dirty if any of the source files or the environment variables passed in have changed.
When using python function builders (described here http://scons.org/doc/1.2.0/HTML/scons-user/x3524.html) it seems like scons only care about the source files.
Here is a minimal artificial example of where it fails. It's two implementations of passing a parameter through the environment and writing it to the target file using the shell. One implementation is just a command string, the other uses python subprocess to invoke it in a python function. I use an argument to scons to select what builder to use.
# SConstruct
import subprocess

def echo_fun(env, source, target):
    subprocess.check_call('echo %s > %s' % (env['MESSAGE'], str(target[0])), shell=True)
    return None

env = Environment(BUILDERS={'echo': Builder(action='echo $MESSAGE > $TARGET'),
                            'echo_py': Builder(action=echo_fun)})
build_fn = env.echo_py if ARGUMENTS.get('USE_PYTHON', False) else env.echo
build_fn(['test.file'], [], MESSAGE=ARGUMENTS.get('MSG', 'None'))
Here is the result of running the scons script with different parameters:
PS C:\work\code\sconsissue> scons -Q MSG=Hello
echo Hello > test.file
PS C:\work\code\sconsissue> scons -Q MSG=Hello
scons: `.' is up to date.
PS C:\work\code\sconsissue> scons -Q MSG=HelloAgain
echo HelloAgain > test.file
PS C:\work\code\sconsissue> del .\test.file
PS C:\work\code\sconsissue> scons -Q MSG=Hello -Q USE_PYTHON=True
echo_fun(["test.file"], [])
PS C:\work\code\sconsissue> scons -Q MSG=Hello -Q USE_PYTHON=True
scons: `.' is up to date.
PS C:\work\code\sconsissue> scons -Q MSG=HelloAgain -Q USE_PYTHON=True
scons: `.' is up to date.
With the ordinary builder, scons detects that the result is dirty when MSG changes (and clean when MSG stays the same), but the python function version considers the target up to date even when MSG has changed.
A workaround for this would be to put my builder scripts in a separate python script and invoke that python script with the environment dependencies as command line parameters but it seems convoluted.
Is this the expected behavior or a bug?
Is there an easier workaround than the one I described above where I can keep my build functions in the SConstruct file?
This is expected behavior because there is no way for SCons to know that the function (as written) depends on MESSAGE.
However if you read the manpage
http://scons.org/doc/production/HTML/scons-man.html
you'll see this (under "Action Objects"):
The variables may also be specified by a varlist= keyword parameter;
if both are present, they are combined. This is necessary whenever you
want a target to be rebuilt when a specific construction variable
changes. This is not often needed for a string action, as the expanded
variables will normally be part of the command line, but may be needed
if a Python function action uses the value of a construction variable
when generating the command line.
...
# Alternatively, use a keyword argument.
a = Action(build_it, varlist=['XXX'])
So if you rewrite as:
# SConstruct
import subprocess

def echo_fun(env, source, target):
    subprocess.check_call('echo %s > %s' % (env['MESSAGE'], str(target[0])), shell=True)
    return None

env = Environment(BUILDERS={'echo': Builder(action='echo $MESSAGE > $TARGET'),
                            'echo_py': Builder(action=Action(echo_fun, varlist=['MESSAGE']))})
build_fn = env.echo_py if ARGUMENTS.get('USE_PYTHON', False) else env.echo
build_fn(['test.file'], [], MESSAGE=ARGUMENTS.get('MSG', 'None'))
It should behave as you desire.

about argparse when using fabric

I am new to python & fabric.
I have a Python module, module1.py, which takes a command line parameter; we use argparse inside module1.py to make that parameter required.
But we are trying to run the whole program through fabric, and if I specify the argument on the command line when running through fab, I get:
mycode.py: error: argument --config_yaml is required
How could I pass the argument through fab?
Thanks!
From what I can see here:
https://github.com/fabric/fabric/blob/master/fabric/main.py#L340
https://github.com/fabric/fabric/blob/master/fabric/main.py#L619
You CAN'T do it. You can't add things to env_options before main.py runs; your code inside fabfile.py will run after main() has already been processed.
You can however do this:
Rename fabfile.py to whatever you want, as long as it's not fabric.py. I called mine fabricc.py.
from fabric.state import env_options, make_option

env_options.append(make_option('--myconfig', dest='config_file', default='default.ini'))

from fabric import main
from fabric.api import task, env

@task
def do_something():
    print('config file: {}'.format(env.config_file))

if __name__ == '__main__':
    main.find_fabfile = lambda x: __file__
    main.main()
now run it:
$ python fabricc.py do_something
config file: default.ini
Done.
or..
$ python fabricc.py do_something --myconfig=somethinelse.ini
config file: somethinelse.ini
Done.
Word of warning: DO NOT call it --config; that is a builtin param.
With this you still enjoy everything you still love about fabric and nothing has changed.
$ python fabricc.py do_something --help
Usage: fab [options] <command>[:arg1,arg2=val2,host=foo,hosts='h1;h2',...] ...
Options:
-h, --help show this help message and exit
-d NAME, --display=NAME
...
-z INT, --pool-size=INT
number of concurrent processes to use in parallel mode
--myconfig=CONFIG_FILE
PS. I don't suggest you do the fabric override to handle it manually; I'm just showing you it can be done. (The reason I don't condone this is compatibility: if the version changes tomorrow, this might break.) It's best to use the product the way the author meant it to be used.
====================================================================
Also, fabric is meant to be a fancy orm builder -- for lack of a better word. You cannot have command line arguments on a per-task basis; it wasn't designed to work like that. BUT it was designed to take in task-function arguments:
@task
def do_something(file=None):
    print('config file: {}'.format(file or 'default.ini'))
$ fab do_something:test
config file: test
Done.
and that is what fabric was created to do.
It's meant to be used like this:
@task
def environment(box_env=None):
    ...

@task
def deploy(branch='master'):
    ...

@task
def provision(recreate_if_existing=False, start_nginx=True):
    ...
fab environment:dev deploy:development
fab environment:dev provision:True,False
all together:
fab environment:dev provision deploy:development
I know argparse, but not fabric. From the error it looks like your script defines a --config_yaml argparse argument (and makes it required). But fabric apparently also uses that argument name.
I don't know if fabric also uses argparse. But it is common for programs like that to strip off the commandline arguments that it expects, and pass the rest on to your code.
Do you need to use --config_yaml in your script? And why is it set to required=True? argparse allows you to specify a required parameter, but ideally flagged arguments like this are optional. If not given they should have a reasonable default.
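For example, a minimal argparse sketch with a default instead of required=True (the default file name here is just an assumption):
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--config_yaml', default='config.yaml',
                    help='path to the YAML config (default: %(default)s)')
args = parser.parse_args()
print(args.config_yaml)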
I see a --config=... argument in the fabric API, but not a --config_yaml.
I wonder if this section about per-task arguments is relevant
http://docs.fabfile.org/en/1.10/usage/fab.html#per-task-arguments
It sounds like you need to add the task name to the argument. fabric doesn't just pass all unknown arguments to the task (which is what I was assuming above). But I don't have fabric installed, so can't test it.
