What is the common header format of Python files? - python

I came across the following header format for Python source files in a document about Python coding guidelines:
#!/usr/bin/env python
"""Foobar.py: Description of what foobar does."""
__author__ = "Barack Obama"
__copyright__ = "Copyright 2009, Planet Earth"
Is this the standard format of headers in the Python world?
What other fields/information can I put in the header?
Python gurus share your guidelines for good Python source headers :-)

It's all metadata for the Foobar module.
The first one is the docstring of the module, which is already explained in Peter's answer.
How do I organize my modules (source files)? (Archive)
The first line of each file should be #!/usr/bin/env python. This makes it possible to run the file as a script that invokes the interpreter implicitly, e.g. in a CGI context.
Next should be the docstring with a description. If the description is long, the first line should be a short summary that makes sense on its own, separated from the rest by a newline.
All code, including import statements, should follow the docstring. Otherwise, the docstring will not be recognized by the interpreter, and you will not have access to it in interactive sessions (i.e. through obj.__doc__) or when generating documentation with automated tools.
Import built-in modules first, followed by third-party modules, followed by any changes to the path and your own modules. In particular, additions to the path and names of your modules are likely to change rapidly: keeping them in one place makes them easier to find.
Next should be authorship information. This information should follow this format:
__author__ = "Rob Knight, Gavin Huttley, and Peter Maxwell"
__copyright__ = "Copyright 2007, The Cogent Project"
__credits__ = ["Rob Knight", "Peter Maxwell", "Gavin Huttley",
               "Matthew Wakefield"]
__license__ = "GPL"
__version__ = "1.0.1"
__maintainer__ = "Rob Knight"
__email__ = "rob@spot.colorado.edu"
__status__ = "Production"
Status should typically be one of "Prototype", "Development", or "Production". __maintainer__ should be the person who will fix bugs and make improvements if imported. __credits__ differs from __author__ in that __credits__ includes people who reported bug fixes, made suggestions, etc. but did not actually write the code.
Here you have more information, listing __author__, __authors__, __contact__, __copyright__, __license__, __deprecated__, __date__ and __version__ as recognized metadata.
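As a small illustration of the point about interactive access, here is a minimal sketch, assuming the question's header is saved as a module named foobar.py:

# Minimal sketch: foobar.py is the hypothetical module shown in the question.
import foobar

print(foobar.__doc__)        # "Foobar.py: Description of what foobar does."
print(foobar.__author__)     # "Barack Obama"
print(foobar.__copyright__)  # "Copyright 2009, Planet Earth"
help(foobar)                 # the docstring also feeds help() and doc generators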

I strongly favour minimal file headers, by which I mean just:
#!/usr/bin/env python # [1]
"""\
This script foos the given bars [2]
Usage: myscript.py BAR1 BAR2
"""
import os # standard library, [3]
import sys
import requests # 3rd party packages
import mypackage # local source
[1] The hashbang, if and only if this file should be directly executable, i.e. run as myscript.py or myscript or maybe even python myscript.py. (The hashbang isn't used in the last case, but providing it gives users the choice of executing it either way.) The hashbang should not be included if the file is a module intended only to be imported by other Python files.
[2] Module docstring
[3] Imports, grouped in the standard way, i.e. three groups of imports with a single blank line between them. Within each group, imports are sorted. The final group, imports from local source, can be either absolute imports as shown or explicit relative imports.
Everything else is a waste of time - both for the author and for subsequent maintainers. It wastes the precious visual space at the top of the file with information that is better tracked elsewhere, and is easy to get out of date and become actively misleading.
If you have legal disclaimers or licensing info, it goes into a separate file. It does not need to infect every source code file. Your copyright should be part of this. People should be able to find it in your LICENSE file, not random source code.
Metadata such as authorship and dates is already maintained by your source control. There is no need to add a less-detailed, erroneous, and out-of-date version of the same info in the file itself.
I don't believe there is any other data that everyone needs to put into all their source files. You may have some particular requirement to do so, but such things apply, by definition, only to you. They have no place in “general headers recommended for everyone”.
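For what it's worth, here is a minimal sketch of what [1] looks like in practice, with hypothetical names, showing a file that works both when executed via the hashbang and when imported:

#!/usr/bin/env python
"""This script foos the given bars.

Usage: myscript.py BAR1 BAR2
"""
import sys


def main(bars):
    print("fooing", bars)


if __name__ == "__main__":  # runs only when executed, not when imported
    main(sys.argv[1:])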

The answers above are really complete, but if you want a quick and dirty header to copy and paste, use this:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Module documentation goes here
and here
and ...
"""
Why this is a good one:
The first line is for *nix users. It looks up the Python interpreter on the user's PATH, so it automatically picks the user's preferred interpreter.
The second one is the file encoding declaration. Every file has an associated encoding; UTF-8 works everywhere, and only legacy projects should need a different one. (Python 3 already assumes UTF-8 source by default, so this line matters mainly for Python 2 or non-UTF-8 files.)
And a very simple piece of documentation. It can span multiple lines.
See also: https://www.python.org/dev/peps/pep-0263/
If you just write a class in each file, you don't even need the documentation (it would go inside the class doc).

Also see PEP 263 if you are using a non-ASCII character set.
Abstract
This PEP proposes to introduce a syntax to declare the encoding of a Python source file. The encoding information is then used by the Python parser to interpret the file using the given encoding. Most notably this enhances the interpretation of Unicode literals in the source code and makes it possible to write Unicode literals using e.g. UTF-8 directly in an Unicode aware editor.
Problem
In Python 2.1, Unicode literals can only be written using the Latin-1 based encoding "unicode-escape". This makes the programming environment rather unfriendly to Python users who live and work in non-Latin-1 locales such as many of the Asian countries. Programmers can write their 8-bit strings using the favorite encoding, but are bound to the "unicode-escape" encoding for Unicode literals.
Proposed Solution
I propose to make the Python source code encoding both visible and changeable on a per-source file basis by using a special comment at the top of the file to declare the encoding.
To make Python aware of this encoding declaration a number of concept changes are necessary with respect to the handling of Python source code data.
Defining the Encoding
Python will default to ASCII as standard encoding if no other encoding hints are given.
To define a source code encoding, a magic comment must be placed into the source files either as first or second line in the file, such as:
# coding=<encoding name>
or (using formats recognized by popular editors)
#!/usr/bin/python
# -*- coding: <encoding name> -*-
or
#!/usr/bin/python
# vim: set fileencoding=<encoding name> :
...
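As a side note (not part of the PEP text), here is a minimal sketch showing that on Python 3 the parser honors this magic comment even when compiling source bytes programmatically:

# Minimal sketch: compile() applies the PEP 263 declaration when given bytes.
source = b"# -*- coding: latin-1 -*-\ns = '\xe9'\n"  # 0xE9 is 'é' in Latin-1
namespace = {}
exec(compile(source, "<pep263-demo>", "exec"), namespace)
print(namespace["s"])  # -> é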

What I use in some projects is this as the first line, on Linux machines:
# -*- coding: utf-8 -*-
For documentation and author credits, I like a simple multi-line string. Here is an example from Example Google Style Python Docstrings:
# -*- coding: utf-8 -*-
"""Example Google style docstrings.

This module demonstrates documentation as specified by the `Google Python
Style Guide`_. Docstrings may extend over multiple lines. Sections are created
with a section header and a colon followed by a block of indented text.

Example:
    Examples can be given using either the ``Example`` or ``Examples``
    sections. Sections support any reStructuredText formatting, including
    literal blocks::

        $ python example_google.py

Section breaks are created by resuming unindented text. Section breaks
are also implicitly created anytime a new section starts.

Attributes:
    module_level_variable1 (int): Module level variables may be documented in
        either the ``Attributes`` section of the module docstring, or in an
        inline docstring immediately following the variable.

        Either form is acceptable, but the two should not be mixed. Choose
        one convention to document module level variables and be consistent
        with it.

Todo:
    * For module TODOs
    * You have to also use ``sphinx.ext.todo`` extension

.. _Google Python Style Guide:
   http://google.github.io/styleguide/pyguide.html

"""
It can also be nice to add:
"""
#Author: ...
#Date: ....
#Credit: ...
#Links: ...
"""
Additional Formats
Meta-information markup | devguide
"""
:mod:`parrot` -- Dead parrot access
===================================
.. module:: parrot
:platform: Unix, Windows
:synopsis: Analyze and reanimate dead parrots.
.. moduleauthor:: Eric Cleese <eric#python.invalid>
.. moduleauthor:: John Idle <john#python.invalid>
"""
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ----------------------------------------------------------------------------
# Created By  : name_of_the_creator
# Created Date: date/month/time ..etc
# version ='1.0'
# ---------------------------------------------------------------------------
I also report, similarly to other answers:
__author__ = "Rob Knight, Gavin Huttley, and Peter Maxwell"
__copyright__ = "Copyright 2007, The Cogent Project"
__credits__ = ["Rob Knight", "Peter Maxwell", "Gavin Huttley",
               "Matthew Wakefield"]
__license__ = "GPL"
__version__ = "1.0.1"
__maintainer__ = "Rob Knight"
__email__ = "rob@spot.colorado.edu"
__status__ = "Production"

Related

Dealing with à in generating code for a string literal using Python 3.5's AST module, need to open with right coding

To generate JavaScript from Python in the Transcrypt Python to JS compiler, Python 3.5's ast module is used in combination with the following code:
class Generator (ast.NodeVisitor):
    ...
    ...
    def visit_Str (self, node):
        self.emit (repr (node.s))  # Simplified to need less context on StackOverflow
    ...
    ...
This works fine e.g. for the following line of Python:
test = "âäéèêëiîïoôöùüû"
which is correctly translated to:
var test = 'âäéèêëiîïoôöùüû';
Only the character à gives problems:
test = "àâäéèêëiîïoôöùüû"
is translated to:
var test = 'Ã\xa0âäéèêëiîïoôöùüû';
Is there any way to have the ast module read the source file respecting coding directives like:
# coding=<encoding name>
To open a Python file for parsing, use tokenize.open rather than the ordinary open function.
It will open the file, read the PEP 263 coding hint, and return the open file as if it had been opened by the ordinary open with the right encoding.
It is quite hard to find and is not currently in the Green Tree Snakes doc; I actually found it by searching for 'coding' in the CPython sources on GitHub.
I have created an issue for the Green Tree Snakes doc to add this.
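A minimal sketch of using it (the filename is hypothetical, and on recent Python versions string literals show up as ast.Constant rather than the ast.Str the question's visitor handles):

import ast
import tokenize

# tokenize.open() reads the PEP 263 coding comment and decodes accordingly.
with tokenize.open("myscript.py") as source_file:
    tree = ast.parse(source_file.read(), filename="myscript.py")

# String literals now come back correctly decoded, including characters like à.
for node in ast.walk(tree):
    if isinstance(node, ast.Constant) and isinstance(node.value, str):
        print(repr(node.value))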

Version of python script

I was wondering if it is possible to set a version for a Python script. I would like to be able to see the version of a script by right-clicking on the file, selecting Properties and then selecting the Version tab.
This tab exists on many other files, but is it possible to lure it out on a .py/.pyw file?
Those are text files and thus they don't contain any header that could hold such information.
Specify the version with a __version__ attribute, and ask Microsoft to write this functionality.
__version__ is a string, but sometimes it's also a float or tuple. You should also make sure that the version number conforms to the format described in PEP 440 (PEP 386 is a previous version of this standard).
__version__ = '1.0'
or
__version__ = (1, 0)
__version__ = sys.version[:3]
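For a quick, hedged sketch of how a script usually exposes its own version (the module and flag names below are just examples, not a standard):

# mytool.py (hypothetical name)
import sys

__version__ = "1.2.3"  # a PEP 440-compliant version string

if __name__ == "__main__":
    # Poor man's "Version tab": python mytool.py --version
    if "--version" in sys.argv:
        print(__version__)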

How can I obtain a PE file's instructions using Python?

So I'm trying to write a basic disassembler for a school project using Python. I'm using the pydasm and capstone libraries. What I don't understand is how I can actually access the assembly instructions of a program using Python. These libraries allow me to disassemble instructions, but I can't figure out how to access a program's instructions in Python. Could anyone give me some direction?
Thanks.
You should be cautious about the code section, as the base of the code section might not contain only code (imports or read-only data might be present at this location).
The best way to start a disassembly is by looking at the AddressOfEntryPoint field in the IMAGE_OPTIONAL_HEADER, which indicates the first executed byte in the PE file (except if TLS is present, but that's another subject).
A very good library for browsing PE files in Python is pefile.
Here's an example to get the first 10 bytes at the program entry point:
#!/usr/local/bin/python2
# -*- coding: utf8 -*-
from __future__ import print_function
import sys
import os.path
import pefile

def find_entry_point_section(pe, eop_rva):
    for section in pe.sections:
        if section.contains_rva(eop_rva):
            return section
    return None

def main(file_path):
    print("Opening {}".format(file_path))
    try:
        pe = pefile.PE(file_path, fast_load=True)
        # AddressOfEntryPoint is guaranteed to be the first byte executed.
        eop = pe.OPTIONAL_HEADER.AddressOfEntryPoint
        code_section = find_entry_point_section(pe, eop)
        if not code_section:
            return
        print("[+] Code section found at offset: "
              "{:#x} [size: {:#x}]".format(code_section.PointerToRawData,
                                           code_section.SizeOfRawData))
        # get the first 10 bytes at the entry point and dump them
        code_at_oep = code_section.get_data(eop, 10)
        print("[*] Code at EOP:\n{}".
              format(" ".join("{:02x}".format(ord(c)) for c in code_at_oep)))
    except pefile.PEFormatError as pe_err:
        print("[-] error while parsing file {}:\n\t{}".format(file_path,
                                                              pe_err))

if __name__ == '__main__':
    if len(sys.argv) < 2:
        print("[*] {} <PE_Filename>".format(sys.argv[0]))
    else:
        file_path = sys.argv[1]
        if os.path.isfile(file_path):
            main(file_path)
        else:
            print("[-] {} is not a file".format(file_path))
Simply pass the name of your PE file as the first argument.
In the above code the code_at_oep variable holds the first few bytes of the entry point. From there you can pass these bytes to the Capstone engine.
Note that these first bytes might simply be a jmp or call instruction, so you'll have to follow the code execution in order to disassemble correctly. Correctly disassembling a program is still an open problem in computer science...
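For completeness, here is a minimal sketch of handing those bytes to Capstone, reusing the code_at_oep, eop and pe names from the script above (so in practice it would live inside main()) and assuming a 32-bit x86 binary:

from capstone import Cs, CS_ARCH_X86, CS_MODE_32

# code_at_oep, eop and pe come from the pefile script above (assumed in scope).
md = Cs(CS_ARCH_X86, CS_MODE_32)
# The second argument is the virtual address used for display/relative targets.
for insn in md.disasm(code_at_oep, pe.OPTIONAL_HEADER.ImageBase + eop):
    print("0x{:x}:\t{}\t{}".format(insn.address, insn.mnemonic, insn.op_str))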
This depends on what OS you're using. You have some other questions about Linux, so I'm assuming you're using that. On Linux, executables are typically in ELF format, so you'll need a python library to read that or else to use some other tool to extract the part of the ELF file that you want.
The actual instructions are stored in the .text section, so if you extract that section's contents, those should be the raw bytes you want to disassemble.
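For example, a minimal sketch using the third-party pyelftools library (my own assumption; the answer does not name a specific library) to pull out the raw .text bytes:

from elftools.elf.elffile import ELFFile  # pip install pyelftools

with open("/bin/ls", "rb") as f:           # any ELF binary; this path is just an example
    elf = ELFFile(f)
    text = elf.get_section_by_name(".text")
    code = text.data()                     # raw bytes to feed to a disassembler
    print("entry: {:#x}, .text size: {}".format(elf.header["e_entry"], len(code)))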
So here is my code that disassembles your exe file and gives you output in x86 assembly language; I used the pefile and capstone libraries.
#!/usr/bin/python
import pefile
from capstone import *
# load the target PE file
pe = pefile.PE("/file/path/code.exe")
# get the address of the program entry point from the program header
entrypoint = pe.OPTIONAL_HEADER.AddressOfEntryPoint
# compute memory address where the entry code will be loaded into memory
entrypoint_address = entrypoint+pe.OPTIONAL_HEADER.ImageBase
# get the binary code from the PE file object
binary_code = pe.get_memory_mapped_image()[entrypoint:entrypoint+100]
# initialize disassembler to disassemble 32 bit x86 binary code
disassembler = Cs(CS_ARCH_X86, CS_MODE_32)
# disassemble the code
for instruction in disassembler.disasm(binary_code, entrypoint_address):
    print("%s\t%s" % (instruction.mnemonic, instruction.op_str))
Make sure to give the correct path to your exe file. You also need to specify how much code you want to disassemble: in [entrypoint:entrypoint+100] I take only 100 bytes starting at the entry point, but you can change that.

What is "_csv" in Python?

In attempting to read the source code for the csv.py file (as a guide to implementing my own writer class in another context) I found that much of the functionality in that file is, in turn, imported from something called _csv:
from _csv import Error, __version__, writer, reader, register_dialect, \
                 unregister_dialect, get_dialect, list_dialects, \
                 field_size_limit, \
                 QUOTE_MINIMAL, QUOTE_ALL, QUOTE_NONNUMERIC, QUOTE_NONE, \
                 __doc__
I cannot find any file with this name on my system (including searching for files with the Hidden attribute set), although I can do import _csv from the Python shell.
What is this module and is it possible to read it?
_csv is the C "backbone" of the csv module. Its source is in Modules/_csv.c. You can find the compiled version of this module from the Python command prompt with:
>>> import _csv
>>> _csv
<module '_csv' from '/usr/lib/python2.6/lib-dynload/_csv.so'>
There are no hidden files in the Python source code :)
Not to disagree with larsmans' answer.
There is an official explanation of the module naming convention in PEP8:
When an extension module written in C or C++ has an accompanying Python module that provides a higher level (e.g. more object oriented) interface, the C/C++ module has a leading underscore (e.g. _socket)
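You can see that convention in action for csv itself; a quick check, assuming a standard CPython build:

import csv
import _csv

print(csv.writer is _csv.writer)              # True: csv re-exports the C implementation
print(getattr(_csv, "__file__", "built in"))  # e.g. .../lib-dynload/_csv...so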
An answer has already been accepted, but I wanted to add a few details as I had the same question as the OP.
A .csv file's content is just a comma-separated document of values:
first,last
john,doe
...
You can test this by opening up a .csv file in Notepad.
If you want to create a .csv file, you can just write that text to a file yourself and you will get the csv file that you want:
with open('test.csv', 'w') as file:
    file.write('first,last\n')
    file.write('John,Doe\n')
This is a simple solution if you want to write to a .csv file without using "import csv".

Is there a version of ConfigParser that deals with files with no section headers?

I have a config file which is mainly used in shell scripts, and therefore has the following format:
# Database parameters (MySQL only for now)
DBHOST=localhost
DATABASE=stuff
DBUSER=mypkguser
DBPASS=zbxhsxhg
# Storage locations
STUFFDIR=/var/mypkg/stuff
GIZMODIR=/var/mypkg/gizmo
Now I need to read its values from a Python (2.6) script. I would rather not reinvent the wheel by parsing it with descriptor.readlines(), looking for equal signs, skipping lines beginning with '#', dealing with quoted values, and blah blah blah boring. I tried using ConfigParser but it doesn't like files that don't have section headers. Do I have any options, or will I have to do the boring thing?
Oh, by the way, wrapping a shell script around the Python script is not an option. It has to run within Apache.
I'm not aware of such a module, but as a quick and dirty hack, just prepend a [section] header to the file content and you can use ConfigParser as intended!
from io import StringIO
filename = 'ham.egg'
vfile = StringIO(u'[Pseudo-Section]\n%s' % open(filename).read())
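Fleshed out a bit, a sketch of the whole trick using Python 3 names (on Python 2.6 the module is ConfigParser and the method is readfp); the filename and section name are just the examples from above:

from configparser import ConfigParser
from io import StringIO

filename = 'ham.egg'  # the shell-style config file from the question
with open(filename) as f:
    vfile = StringIO('[Pseudo-Section]\n' + f.read())

config = ConfigParser()
config.read_file(vfile)
print(config.get('Pseudo-Section', 'DBHOST'))    # -> localhost
print(config.get('Pseudo-Section', 'GIZMODIR'))  # -> /var/mypkg/gizmo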
