First of all, I want to apologize for even trying to do this. I know that it's not recommended in any way. However, external constraints leave me little choice but to go down this path.
I have a piece of python code that lies on a read-only filesystem. I cannot move it. I cannot modify it. It has an inconsistent use of tabs and spaces. And I need this code to be importable with the -tt option.
Is there any way to ignore the -tt option for a specific import statement, a specific code section, or a certain application altogether?
I fully admit that this is a horrible, horrible solution. I await the downvotes:
dodgymodule.py:
def somefunc():
    print("This is indented using 4 spaces")
	print("This is indented using a tab")
main python script, which uses autopep8 to fix the code and import the resulting string instead:
import autopep8
import imp

try:
    import dodgymodule
except TabError as e:
    with open(e.filename, 'r') as f:
        new_module_contents = autopep8.fix_code(f.read())
    dodgymodule = imp.new_module('dodgymodule')
    exec(new_module_contents, dodgymodule.__dict__)

dodgymodule.somefunc()
python3 -tt script.py prints out the lines, as hoped.
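On Python 3.4+, imp is deprecated; the same trick can be sketched with types.ModuleType instead. The fixed source is inlined here so the sketch is self-contained (in the real workaround it would come from autopep8.fix_code on the broken file):

```python
import types

# Source recovered from the broken file, with the tab replaced by
# spaces by hand here; autopep8.fix_code would do this step in the
# real workaround.
fixed_source = (
    "def somefunc():\n"
    "    print('This is indented using 4 spaces')\n"
    "    print('This was indented using a tab')\n"
)

# Build a fresh module object and run the fixed source in its
# namespace, mirroring what the deprecated imp.new_module did.
dodgymodule = types.ModuleType('dodgymodule')
exec(fixed_source, dodgymodule.__dict__)

dodgymodule.somefunc()
```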
I have some experience with other languages, but I'm a novice with Python. I have come across code in Jupyter notebooks where sys is imported.
I can't see any further use of the sys module in the code. Can someone help me understand the purpose of importing sys?
I do know about the module and its uses, but I can't find a concise reason why it is imported in many code blocks without any further use.
If nothing declared within sys is actually used, then there's no benefit to importing it. There's not a significant amount of cost either.
The sys module is quite useful, as it allows you to work with your system and the Python runtime. E.g.:
- You can access any command-line arguments using sys.argv[1:]
- You can see the module search path via sys.path
- You can get the version of your Python interpreter using sys.version
- You can exit the running code with sys.exit()
Mostly you will use it for accessing the command-line arguments.
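A minimal sketch exercising those attributes (the exact outputs depend on your interpreter and how the script is invoked):

```python
import sys

# Interpreter version info, e.g. sys.version_info.major == 3.
print(sys.version_info.major)

# sys.argv[0] is the script path; sys.argv[1:] are the arguments.
print(sys.argv[1:])

# sys.exit raises SystemExit; a string argument is printed to
# stderr and the process exits with status 1.
try:
    sys.exit("done")
except SystemExit as exc:
    print(exc.code)
```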
I'm a new pythonista; I learned to import it whenever I want to exit the program with a nice exit message:
import sys

name = input("What's your name? ")
if name == "Vedant":
    print(f"Hello There {name}.")
else:
    sys.exit("You're not Vedant!")
The sys module includes functions and variables that help you control and interact with the Python runtime. Some examples of this control include:
1- Reading input from other sources via sys.stdin
2- Writing output to other destinations via sys.stdout
3- Errors are written automatically, when an exception happens, to sys.stderr
4- Exiting the program while printing a message, like sys.exit("Finished with the calculations.")
5- The built-in variable listing the directories in which the interpreter will look for modules: sys.path
6- Finding the size in bytes of any object via sys.getsizeof(1), sys.getsizeof(3.8)
I am introducing the loggers into my project and I would like to ban the print statement usage in it. My intent is to force any future developers to use the loggers and to make sure I replaced all print invocations in project.
So far I managed to restrict print('foo') and print 'foo' like invocations with:
from __future__ import print_function

def print(*args, **kwargs):
    raise SyntaxError("Don't use print! Use logger instead.")
But it is still possible to write a bare print (with the intent of printing a newline); it just evaluates the function object without calling it, so it won't do anything.
Is it possible to do it without interpreter modifications?
EDIT:
I wasn't clear enough, I guess, from the comments. I just wanted to know if I can prevent the print function from being aliased:
print("foo")  # raises exception
print "foo"   # doesn't work either
print         # doesn't raise any exception, but I want it to
foo = print   # this shouldn't work either, like the one above, but it does
No, you can't prevent print statements from being used in code that doesn't use from __future__ import print_function. print statements are not hooked, they are compiled directly to a set of opcodes and the implementation of those opcodes just write directly to stdout or other file object (when using the >> notation).
You could go the drastic route of requiring a custom codec, but that's no better than requiring that from __future__ import print_function is used.
By the same token, if all code does use from __future__ import print_function, while you can assign a new print function to the __builtin__ module, you can't prevent someone from building their own version (named print or something else) that writes to sys.stdout, or from executing reload(__builtin__). Python is highly dynamic and flexible, I'd not try to lock this down.
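For illustration, here is what assigning a replacement print into the builtins module looks like on Python 3 (where the module is named builtins rather than __builtin__), and why it is trivially reversible:

```python
import builtins

_original_print = builtins.print

def forbidden_print(*args, **kwargs):
    # Stand-in policy: refuse all printing.
    raise RuntimeError("Don't use print! Use a logger instead.")

builtins.print = forbidden_print

try:
    print("hello")  # now raises
except RuntimeError as exc:
    _original_print("caught:", exc)

# ...but nothing stops anyone from simply undoing it:
builtins.print = _original_print
print("back to normal")
```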
The normal path to enforce coding standards is to use a linter, code review and tests. You can install hooks on most version control systems that prevent code from being checked in that doesn't pass a linter, and both pylint and flake8 support custom plugins. You can run a test that configures the logging module to direct all output to a file then raise an exception if anything is written to stdout, etc.
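The last suggestion can be sketched with contextlib.redirect_stdout; do_work here is a hypothetical stand-in for the code under test:

```python
import contextlib
import io
import logging

def do_work():
    # Hypothetical code under test: logs instead of printing.
    # Logging output goes to stderr by default, not stdout.
    logging.getLogger(__name__).info("working")

# Capture anything written to stdout while do_work runs.
buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    do_work()

# Fail loudly if anything reached stdout.
assert buffer.getvalue() == "", "stdout output detected; use a logger instead"
print("no stray prints detected")
```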
This is the path that Facebook uses (and it is not alone in this approach), where Python code must pass the Facebook flake8 configuration, which includes the flake8-bugbear extension, and code is autoformatted using Black to make it easy for developers to meet those requirements.
I suspect that I have an issue in one of my loops, so I set up a breakpoint with pdb.set_trace():
import pdb

for i in range(100):
    print("a")
    pdb.set_trace()
    print("b")
After checking the variables in this loop a few times, I decided to continue the program without further breaks. So I tried to get the breakpoint number with the b command, but no breakpoints were listed. I guess this line of code doesn't set up a breakpoint that b knows about. But how do I get rid of these "breakpoints" without stopping the program and changing the code?
To my knowledge, you cannot bypass set_trace, but you can neutralize it. Once the debugger has stopped, type:
pdb.set_trace = lambda: 1
then continue; it won't break again.
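A self-contained sketch of the effect (here the replacement happens in code rather than at the (Pdb) prompt):

```python
import pdb

# Replace set_trace with a no-op, as you would at the (Pdb) prompt.
pdb.set_trace = lambda: 1

for i in range(3):
    pdb.set_trace()  # now does nothing instead of breaking
    print(i)
```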
Setting a breakpoint (requires Python 3.7):
breakpoint()
Disabling breakpoints set with the breakpoint() function:
import os
os.environ["PYTHONBREAKPOINT"] = "0"
Long story:
Python 3.7 introduced the breakpoint() built-in function for setting breakpoints. By default, it calls pdb.set_trace(). Python 3.7 also added the PYTHONBREAKPOINT environment variable, which is consulted when the breakpoint() function is used.
So, in order to disable these breakpoints (set with the breakpoint() function), one can just set the PYTHONBREAKPOINT environment variable like this:
import os
os.environ["PYTHONBREAKPOINT"] = "0"
It may be useful to mention sys.breakpointhook() here, which was also added in Python 3.7 and allows you to customize breakpoint behavior.
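For example, a hook that just reports the call site instead of entering a debugger (a sketch; sys._getframe is a CPython implementation detail):

```python
import sys

def quiet_hook(*args, **kwargs):
    # The frame one level up is the code that called breakpoint(),
    # since the C-level builtin adds no Python frame of its own.
    frame = sys._getframe(1)
    print("breakpoint() hit at line", frame.f_lineno)

sys.breakpointhook = quiet_hook

breakpoint()  # prints the location instead of starting pdb

# Restore the default hook.
sys.breakpointhook = sys.__breakpointhook__
```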
Unfortunately pdb is missing a bunch of functionality (even basic stuff like display lists), and you've found another example of that here. The good news is that pdb++ is a great drop-in replacement for pdb, and one of the things it solves is exactly the problem of disabling set_trace. So you can simply do:
pip install pdbpp
and then at the (Pdb++) prompt, type
pdb.disable()
Easy! And you will get lots of other useful goodies on top of that.
It is possible to start a Python script without PDB control but then hit a stray set_trace() left there. To prevent breaking into debugger every time set_trace() is encountered, a similar trick as above (changing the symbol's reference to point to a harmless function) can be applied.
However, the namespace of the debuggee has to be modified, not that of the debugger itself. Simply overwriting pdb.set_trace = lambda: 1 or set_trace = lambda: 1 did not work for me.
The following trick worked from the pdb prompt:
(Pdb) globals()['set_trace'] = lambda: 1
This line first calls globals() to get access to a dict of the program under debugging, and then modifies the reference of set_trace there.
One way around this is to not write the breakpoints in the script itself, but rather set breakpoints when you start python with python -m pdb my_script.py
You then get into a prompt first, before execution of the script starts, and you can write for example
(Pdb) b 321
to set a breakpoint on line 321 (you can also specify a file, and add conditions: b 321, i == 50)
Then
(Pdb) c
(for continue) to start the execution of the actual script. The breakpoints you set in this way, you can clear when you're done with them with:
(Pdb) cl
(for clear)
I'm making a network sniffing tool for personal use, and I can't find the syntax error within my code. This is Python 2.7.9, by the way.
Here's the code:
def main():
    global listen
    global port
    global command
    global execute
    global upload_destination
    global target

    if not len(sys.argv[1:]):
        usage()

    # read the commandline options
It says the error is in the next 3 lines. Any ideas?
    try:
        opts, args = getopt.getopt(sys.argv[1:], "hle:t:p:cu:", ¬ ["help","listen","execute","target","port","command","upload"])
    except getopt.GetoptError as err:
        print str(err)
        usage()
I feel there's been a mix up between Python 2 and 3 but I'm not sure.
¬ ["help","listen","execute","target","port","command","upload"])
"¬" This is not valid Python syntax. Removing it should solve the issue.
Also in the future maybe post the actual error which is being shown in the output.
First, ¬ is not valid Python syntax where you placed it. Python only accepts that character inside a string literal:
print "¬"
As bare source code it is a SyntaxError, because there is no operator or command called ¬. Also, in the try statement you have an indentation of 8 spaces. Any indentation width works, but it must be consistent: every line in the same block needs the same amount. Since your indentation is inconsistent, that could also be a reason you are getting an error.
In Python, what do you do when you write 100 lines of code and forget to add a bunch of loop statements somewhere?
I mean, if you add a while statement somewhere, you now have to indent all the lines below it. It's not like you can just put braces around the block and be done with it; you have to go to every single line and add tabs/spaces. What if you were adding nested loops/if statements to existing code?
Am I missing some shortcut?
I think every serious editor or IDE supports selecting multiple lines and pressing Tab to indent or Shift-Tab to unindent all those lines.
In IDLE, the standard Python IDE, select the code, go to 'Format', and you can choose Indent Region, Dedent Region, and so on.
You have to use an editor command to re-indent.
Keep in mind: Beautiful is better than ugly.
... and the rest of "The Zen of Python, by Tim Peters"
# python -c "import this"
edit: rewrote to accommodate fileinput's "eccentricities"*
def indent_code(filename, startline, endline):
    from fileinput import input
    from itertools import izip, count

    all_remaining = count()

    def print_lines(lines, prefix='', range=all_remaining):
        for _, line in izip(range, lines):
            print prefix + line,

    lines = input(filename, inplace=1)
    print_lines(lines, range=xrange(1, startline))  # 1-based line numbers
    print_lines(lines, '    ', xrange(startline, endline + 1))  # inclusive
    print_lines(lines)

def main():
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument('filename')
    parser.add_argument('startline', type=int)
    parser.add_argument('endline', type=int)
    ns = parser.parse_args()
    indent_code(ns.filename, ns.startline, ns.endline)

if __name__ == '__main__':
    main()
Well, either that or >}.
*: I originally wrote this using a nice, concise combination of stdout.writelines and some generator expressions. Unfortunately, that code didn't work. The iterator returned by fileinput.input() doesn't actually open a file until you call its next method. It works its sketchy output-redirection magic on sys.stdout at the same time. This means that if you call sys.stdout.writelines and pass it the fileinput.input iterator, your call, and the output, goes to the original standard out rather than the one remapped by fileinput to the file "currently" being processed. So you end up with the lines that are supposed to replace the contents of the file being instead just printed to the terminal.
It's possible to work around this issue by calling next on the fileinput iterator before calling stdout.writelines, but this causes other problems: reaching the end of the input file causes its handle to be closed from the iterator's next method when called within file.writelines. Under Python 2.6, this segfaults because there's no check made (in the C code which implements writelines) to see if the file is still open, and the file handle non-zero, after getting the next value from the iterator. I think under 2.7 it just throws an exception, so this strategy might work there.
The above code actually does test correctly.
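On Python 3 (where the izip and fileinput quirks above are less painful), the same in-place indent can be sketched much more simply; the names mirror the original:

```python
import fileinput

def indent_code(filename, startline, endline, prefix='    '):
    # fileinput with inplace=True redirects print() into the file,
    # so echoing each (possibly prefixed) line rewrites it in place.
    for lineno, line in enumerate(fileinput.input(filename, inplace=True), 1):
        if startline <= lineno <= endline:  # 1-based, inclusive
            line = prefix + line
        print(line, end='')
```

For example, indent_code('script.py', 5, 10) would indent lines 5 through 10 by four spaces.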
textmate (and maybe e?): select then apple-]
bbedit: also select then apple-]
emacs: select then M-x 'indent-region'
bpython: don't know, autoindenting is so easy in bpython, you'd have to work to break it
xcode: don't do python in xcode
that's generally all I need to know. also yeah it's easy to slap a brace above or below a poorly indented block, but you know it's just going to confuse the shit out of you a week later when you haven't been staring at it for like a day. srsly u guys.