Compile a Python application to C

I have made an application in Python. It contains several plugins, organized into different subdirectories. I need to compile it entirely to C code to protect the source code. I have looked at Cython, but I cannot find how to compile the entire directory with all of the plugin dependencies. I need a way to compile each of the dependencies to C so that the application runs from the compiled C code.
http://docs.cython.org/src/quickstart/build.html
How to compile and link multiple python modules (or packages) using cython?

Python does not compile to native code. Scripts can be "frozen" with a few different tools, which makes them into standalone executables, but it's not actually compiling to C, it's packaging the script (or just its Python byte code representation) with a copy of the interpreter and all of its dependencies; the binary still has all the Python code (or the trivial byte code transform thereof) in it.
Cython lets you compile a syntactic variant of Python into Python C extensions, but they still run on the Python interpreter, and they still expose enough information to reverse the transformation.
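For what it's worth, if you do go the Cython route for obfuscation, its build helper can sweep a whole package tree with a glob pattern. A minimal sketch of a setup.py (assuming your plugins live under a package directory called module/; the layout and glob pattern are my assumption, not something from your question):

from setuptools import setup
from Cython.Build import cythonize

setup(
    name="module",
    ext_modules=cythonize(
        ["module/**/*.py"],  # every module and plugin in the tree
        compiler_directives={"language_level": "3"},
    ),
)

Build it with python setup.py build_ext --inplace; the generated .so/.pyd files are imported in place of the .py files, but as noted above they still require the Python runtime to be present.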
Get the proper legal protections in place and freeze your Python executable if you like (freezing is enough to make the source code "non-obvious" even if anyone who went to trivial effort could get it back), but Python does not compile to plain C directly (if it did, I'd expect the CPython reference interpreter to do that more often just to gain performance with the built-in modules, yet they only write C accelerators by hand).

Related

How to compile a whole Python library (including dependencies) so that it can be used in C?

How to compile a whole Python library along with its dependencies so that it can be used in C (without invoking Python's runtime). That is, the compiled code has the Python interpreter embedded and Python does not need to be installed on the system.
From my understanding, when Python code is compiled using Cython it:
Does not invoke the python runtime if the --embed argument is used
Compiles files individually
Allows for different modules to be called (from the Python runtime / other compiled Cython files)
The questions which are still unclear are:
How to use these module files from C? Can the compiled Python files call other compiled Python files when used in C?
Does only the library entry point need to be declared or do all functions need to be declared?
How to manage the Python dependencies? How to compile them too (so that the Python runtime is not needed)?
A simplified example for a python library called module where __init__.py is an empty file:
module/
├── run.py
├── http/
│ ├── __init__.py
│ ├── http_request.py
http_request.py contains:
import requests

def get_ip():
    r = requests.get('https://ipinfo.io/ip')
    print(r.text)
and run.py contains the following:
from http import http_request

if __name__ == '__main__':
    http_request.get_ip()
How can the function get_ip be called from C without using the Python runtime (i.e. without needing to have Python installed when running the application)?
The above example is very simple. The actual use case is collecting/processing robotics data in C at a high sampling rate. Whilst C is great for basic data processing there are excellent Python libraries which allow for much more comprehensive analysis. The objective would be to call the Python libraries on data which has been partially processed in C. This would allow us to get a much more detailed understanding of the data (and process it in "real time"). The data frameworks are way too large for our team to rewrite in C.
How to compile a whole Python library along with its dependencies so that it can be used in C (without invoking Python's runtime).
This is impossible in general. Python code is practically expected to run on a Python interpreter.
Sometimes, when only a small subset of Python is used (even indirectly, by everything your Python code depends on), you might use Cython (which is really a superset of a small subset of Python: many genuine Python features either cannot be used from Cython or still require the Python interpreter). But not every piece of Python code can be cythonized, since Python and C have very different (and incompatible) semantics and memory management.
Otherwise (and most often), the C code using your Python stuff should embed the Python interpreter.
A wiser and more robust approach, if your goal is to make a self-sufficient C library usable from many C programs (on systems without Python), is to rewrite your code in C.
You could also consider starting (in your C library) some Python process (server-like, doing your Python stuff) and using inter-process communication facilities, which would be operating-system specific. Of course Python needs to be installed on the system of the application using your library. For example, on Linux, you might fork some Python process in your library and use pipe(7) or unix(7) sockets to communicate from the C library to that process (perhaps using something like JSON-RPC).
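To make that IPC idea concrete, here is a minimal sketch of what the Python side of such a setup could look like (the one-JSON-message-per-line protocol and the method name are purely illustrative, not an existing library):

import json
import sys

def analyze(samples):
    # placeholder for the real Python-side analysis code
    mean = sum(samples) / len(samples) if samples else 0.0
    return {"count": len(samples), "mean": mean}

# Read one JSON request per line from stdin (the pipe from the C side),
# write one JSON reply per line to stdout.
for line in sys.stdin:
    request = json.loads(line)
    if request.get("method") == "analyze":
        result = analyze(request.get("params", []))
    else:
        result = {"error": "unknown method"}
    sys.stdout.write(json.dumps({"id": request.get("id"), "result": result}) + "\n")
    sys.stdout.flush()  # the C side is blocked reading the pipe, so flush every reply

The C program would fork/exec this script, write requests to its stdin and read replies from its stdout.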
Your edit (still not an MCVE) shows some HTTP interaction done in Python. You could consider doing that in C, with the help of HTTP client libraries in C like libcurl, or (if so needed) of HTTP server libraries like libonion.
So consider rewriting your stuff in C but using several existing C libraries (how and what to choose is a very different question, probably off-topic on StackOverflow). Otherwise, accept the dependencies on Python.
The actual use case is collecting/processing robotics data in C at a high sampling rate. Whilst C is great for basic data processing there are excellent Python libraries which allow for much more comprehensive analysis.
You could keep high-level things in Python (see this) but recode low-level things in C to accelerate them (many projects do that, e.g. TensorFlow), perhaps as C extensions for Python or in some other process. Of course, that means some development effort. I don't think that avoiding Python entirely is reasonable (getting rid of Python entirely is not pragmatic) if you use a lot of code in Python. BTW, you might perhaps consider embedding some other language in your C application (e.g. Lua, Guile, OCaml - all of them are reputed to be faster than Python) and keep Python for the higher level, running in some other process.
You need to put more effort into the architectural design of your system. I'm not sure that avoiding Python entirely is a wise thing to do. Mixing Python and C (perhaps by having several processes cooperate) could be wiser. Of course, you'll have operating-system-specific stuff (notably on the C side, for inter-process communication). If on Linux, read something about Linux system programming in C, e.g. ALP or something newer.

How to compile a Python package to a dll

Well, I have a Python package. I need to compile it as a dll before distributing it, in a way that is easily importable. How? You may suggest *.pyc, but I read somewhere that any *.pyc can be easily decompiled!
Update:
Follow these:
1) I wrote a python package
2) want to distribute it
3) do NOT want distribute the source
4) *.pyc is decompilable >> source can be extracted!
5) dll is standard
Write everything you want to hide in Cython, and compile it to pyd. That's as close as you can get to making compiled python code.
Also, dll is not a standard, not in the Python world. They're not portable, either.
Nowadays a simple solution exists: use the Nuitka compiler as described in the Nuitka User Manual.
Use Case 2 - Extension Module compilation
If you want to compile a single extension module, all you have to do is this:
python -m nuitka --module some_module.py
The resulting file some_module.so can then be used instead of some_module.py.
You need to compile for each platform you want to support and write some initialization code to import the so/pyd file appropriate for the given platform/Python version etc. (but see the edit below - this part is now handled automatically).
[EDIT 2021-12]: Actually in python 3 the proper so/dll is determined automatically based on the file name (if it includes python version and platform - can't find PEP for this feature at the moment but Nuitka creates proper names for compiled modules). So for python 2.7 the library name would be something.pyd or something.so whereas for python 3 this would change to something.cp36-win32.pyd or something.cpython-36m-x86_64-linux-gnu.so (for 32bit python 3.6 on x86).
The result is not a DLL as requested, but a Python-native compiled binary format (it is not bytecode as in pyc files; the so/pyd format cannot be easily decompiled - Nuitka compiles to machine code through C++ translation).
EDIT [2020-01]: The compiled module can still be inspected using Python's standard mechanisms - e.g. it can be imported like any other module and have its methods listed, etc. To secure the implementation from being exposed that way, there is more work to be done than just compiling to a binary module.
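For example (assuming some_module was compiled with Nuitka as shown above), nothing stops a user of the binary from doing:

import some_module           # the compiled .so/.pyd imports like any .py module
print(dir(some_module))      # lists the functions and classes it exposes
help(some_module)            # docstrings are normally kept, too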
You can use py2exe.org to convert Python scripts into Windows executables. Granted, this will only work on Windows, but it's better than nothing.
You can embed python inside C. The real trick is converting between C values and Python values. Once you've done that, though, making a DLL is pretty straightforward.
However, why do you need to make a dll? Do you need to use this from a non-python program?
Python embedding is supported since CFFI version 1.5; you can create a .dll file which can be used by a Windows C application.
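A minimal sketch of that approach, loosely following the embedding example in the CFFI documentation (the module name my_plugin and the exported function are illustrative):

import cffi

ffibuilder = cffi.FFI()

# C-level API that the resulting shared library will export:
ffibuilder.embedding_api("int add_numbers(int, int);")

ffibuilder.set_source("my_plugin", "")

# Python code that runs inside the embedded interpreter and implements the API:
ffibuilder.embedding_init_code("""
    from my_plugin import ffi

    @ffi.def_extern()
    def add_numbers(x, y):
        return x + y
""")

ffibuilder.compile(target="my_plugin.*", verbose=True)  # my_plugin.dll / my_plugin.so

A C application can then load the resulting library with LoadLibrary/dlopen and call add_numbers(); the embedded interpreter is started automatically on the first call.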
I would also use Cython to generate pyd files, like Dikei wrote.
But if you really want to secure your code, you should rather write the important stuff in C++. The best would be to combine both C++ and Python. The idea: you would leave the Python code open for adjustments, so that you don't have to compile everything over and over again. That means you would write the "core" in C++ (which is the most secure solution these days) and use those dll files in your Python code. It really depends on what kind of tool or program you are building and how you want to execute it. I mostly create an executable (exe, app) once I finish a tool or a program, but this is more for the end user. This can be done with py2exe and py2app (both 64-bit compatible). If you bundle the interpreter, the end user's machine doesn't have to have Python installed on the system.
A pyd file is essentially the same as a dll and is fully supported inside Python, so you can import your module normally. You can find more information about it here.
Using and generating pyd files is the fastest and easiest way to create safe and portable python code.
You could also write real dll files in C++ and import them with ctypes to use them (here is a good post and here is the Python description of how it works).
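For example, a small sketch with ctypes (mylib.dll and its add function are hypothetical):

import ctypes

lib = ctypes.CDLL("./mylib.dll")                   # on Linux: "./libmylib.so"
lib.add.argtypes = (ctypes.c_int, ctypes.c_int)    # declare the C signature
lib.add.restype = ctypes.c_int
print(lib.add(2, 3))                               # -> 5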
To expand on the answer by Nick ODell
You must be on Windows for DLLs to work; they are not portable.
However, the code below is cross-platform, and every platform supports some run-time library format, so it can be re-compiled for each platform you need it to work on.
Python does not (yet) provide an easy tool to create a dll, however you can do it in C/C++
First you will need a compiler (Windows does not have one by default) notably Cygwin, MinGW or Visual Studio.
A basic knowledge of C is also necessary (since we will be coding mainly in C).
You will also need to include the necessary headers, I will skip this so it does not become horribly long, and will assume everything is set up correctly.
For this demonstration I will print a traditional hello world:
Python code we will be converting to a DLL:
def foo(): print("hello world")
C code:
#include "Python.h" // Includes everything to use the Python-C API
int foo(void); // Declare foo
int foo(void) { // Name of our function in our DLL
    Py_Initialize(); // Initialise Python
    PyRun_SimpleString("print('hello world')"); // Run the Python commands
    return 0; // Finish execution
}
Here is the tutorial for embedding Python. There are a few extra things that should be added here, but for brevity I have left those out.
Compile it and you should have a DLL. :)
That is not all. You will need to distribute whatever dependencies are needed; that will mean the python36.dll run-time and some other components to run the Python script.
My C coding is not perfect, so if anyone can spot any improvements please comment and I will do my best to fix it.
It might also be possible in C# from this answer How do I call a specific Method from a Python Script in C#?, since C# can create DLLs, and you can call Python functions from C#.
You can use pyinstaller to convert a .py file into an executable, with all required packages bundled alongside it as .dll files.
Step 1. pip install pyinstaller.
Step 2. Create a new Python file; let's name it code.py.
Step 3. Write some lines of code, e.g. print("Hello World").
Step 4. Open a Command Prompt in the same location and run pyinstaller code.py. Last step: in the same location, two folders named build and dist will be created. Inside the dist folder there is a folder code, and inside that folder there is an exe file code.exe along with the required .dll files.
If your only goal is to hide your source code, it is much simpler to just compile your code to an executable (using PyInstaller, for example) and use a module with readable source for communication.
NOTE: You might need more converter functions as shown in this example.
Example:
Module:
import subprocess
import codecs
def _encode_str(s):
    encoded=s.encode("utf-32","surrogatepass")
    return codecs.encode(encoded,"base64").replace(b"\n",b"")
def _decode_str(b64):
    return codecs.decode(b64,"base64").decode("utf-32","surrogatepass")
def strlen(s:str):#return length of str;int
    proc=subprocess.Popen(["path_to_your_exe.exe","strlen",_encode_str(s).decode("ascii")],stdout=subprocess.PIPE)
    return int(proc.stdout.read())
def random_char_from_string(s):
    proc=subprocess.Popen(["path_to_your_exe.exe","randchr",_encode_str(s).decode("ascii")],stdout=subprocess.PIPE)
    return _decode_str(proc.stdout.read())
Executable:
import sys
import codecs
import random
def _encode_str(s):
    encoded=s.encode("utf-32","surrogatepass")
    return codecs.encode(encoded,"base64").replace(b"\n",b"")
def _decode_str(b64):
    return codecs.decode(b64,"base64").decode("utf-32","surrogatepass")
command=sys.argv[1]
if command=="strlen":
    s=_decode_str(sys.argv[2].encode("ascii"))
    print(len(s))
if command=="randchr":
    s=_decode_str(sys.argv[2].encode("ascii"))
    print(_encode_str(random.choice(s)).decode("ascii"))
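Calling code would then only ever import the readable wrapper module; something like this (assuming the first file above is saved as hidden_api.py and path_to_your_exe.exe is the frozen second script - both names are placeholders):

import hidden_api  # hypothetical name for the wrapper module shown above

print(hidden_api.strlen("hello"))                  # -> 5
print(hidden_api.random_char_from_string("abc"))   # -> 'a', 'b' or 'c'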
You might also want to think about compiling different executables for different platforms, if your package isn't a windows-only package anyways.
This is my idea; it might work. I don't know whether it works or not.
1. Create your *.py files.
2. Rename them into *.pyx.
3. Convert them into *.c files using Cython.
4. Compile the *.c files into *.dll files.
But I don't recommend it, because it won't work on any platform except Windows.
Grab Visual Studio Express and IronPython and do it that way? You'll be in Python 2.7.6 world though.

Can PyPy/RPython be used to produce a small standalone executable?

(Or, "Can PyPy/RPython be used to compile/translate Python to C/C++ without requiring the Python runtime?")
I have tried to comprehend PyPy with its RPython and its Python, its running and its compiling and its translating, and have somewhat failed.
I have a hypothetical Python project (for Windows); I would like to keep its size down, in the order of a hundred kilobytes (O.N.O.) rather than the several megabytes that using py2exe entails (after UPX). Can I use PyPy1 in any way to produce a standalone executable which does not depend on Python26.dll? If I can, does it need to follow the RPython restrictions like for only working on builtin types, or is it full Python syntax?
I do realise that if this can be done I almost certainly couldn't use C modules from Python directly.
1 (Since the time of asking, the situation has become clearer, and this part of the toolchain is more clearly branded as RPython rather than PyPy; it wasn't so in 2010.)
Yes, PyPy can produce standalone executables from RPython code. That means you need to follow all the awkward RPython rules when writing your code. Your existing Python code is very unlikely to work out of the box, and porting existing Python code is usually not fun. It won't make executables as small as C, but for example the rpystone target (from pypy/translator/goal) using the Boehm GC is 80k on 64-bit after stripping.

Why do C programs require decompilers but Python programs don't?

If I write a Python script, anyone can simply point an editor at it and read it. But for programs written in C, one would have to use decompilers and hex tables and such. Why is that? I mean, I simply can't open up the Safari web browser and look at its code.
Note: The author disavows a deep expertise in this subject. Some assertions may be incorrect.
Python actually is compiled into bytecode, which is what gets run by the python interpreter. Whenever you use a Python module, Python will generate a .pyc file with a name corresponding to the module. This is the equivalent of the .o file that's generated when you compile a C file.
So if you want something to disassemble, the .pyc file would be it :)
The process that Python goes through when compiling a module is pretty similar to what gcc or another C compiler does with C source code. The major difference is that it happens transparently as part of execution of the file. It's also optional whether the result is saved: when running a non-module, i.e. an end-user script, Python still compiles the code in memory but doesn't write a .pyc file for it.
So really your question is "Why are python programs distributed as source rather than as compiled modules?" Or, put another way, "Why are C applications distributed as compiled binaries rather than as source code?"
It used to be very common for C applications to be distributed as source code. This was back before operating systems and their various subentities (i.e. linux distributions) became more established. Some distros, for example gentoo, still distribute apps as source code. Apps which are a bit more cutting edge or obscure are still distributed as source code for all platforms they target.
The reason for this is compatibility, and dependencies. The reason you can run the precompiled binary Safari on a Mac, or Firefox on Ubuntu Linux, is because it's been specifically built for that operating system, architecture (e.g. x86_64), and set of libraries.
Unfortunately, compilation of a large app is pretty slow, and needs to be redone at least partially every time the app is updated. Thus the motivation for binary distributions.
So why not create a binary distribution of Python? For one thing, as Aaron mentions, modules would need to be recompiled for each new version of the Python bytecode. But this would be similar to rebuilding a C app to link with a newer version of a dynamic library — Python modules are analogous in this sense to C libraries.
The real reason is that Python compilation is very much quicker than C compilation. This is in part, I think, because of the dynamic nature of the language, and also because it's not as thorough of a compilation. This has its tradeoffs: in particular, Python apps run much more slowly than do their C counterparts, because Python has to interpret the compiled bytecode into instructions for the processor, whereas the C app already contains such instructions.
That all being said, there is a program called py2exe that will take a Python module and distribution and build a precompiled windows executable, including in it the logic of the module and its dependencies, including Python itself. I guess the point of this is to avoid having to coerce people into installing Python on their Windows system just to run your app. Under linux, or I think even OS/X, Python is usually already installed, so precompilation is not really necessary. Linux systems also have super-dandy package managers that will transparently install dependencies such as Python if they are not already installed.
Python is a scripting language; it runs in a virtual machine through an interpreter.
C is a compiled language; the code is compiled to binary code which the computer can run without all the extra stuff Python needs.
This is sorta a big topic. You should look into your local friendly Computer Science curriculum, you'll find a lot of great stuff on this subject there.
The short answer is the Python is an "interpreted" language, which means that it requires a machine language program (the python interpreter) to run the python program, adding a layer of indirection. C or C++ are different. They are compiled directly to machine code, which runs directly on your processor.
There is a lot of additional voodoo to be learned here, however. Technically Python is compiled to a bytecode, and modern interpreters do more and more "Just in Time" compilation, so the boundaries between compiled and interpreted code are getting fuzzier all the time.
In several comments you asked: "Is it then possible to compile python to an executable binary file and then simply distribute that?"
From a theoretical viewpoint, there's no question the answer is yes -- a Python program could be compiled to, and distributed as, fully compiled machine code.
From a practical viewpoint, it's open to a lot more question. There are a few things like Unladen Swallow, Psyco, Shed Skin, and PyPy that you might want to know about though.
Unladen Swallow is primarily an attempt at making Python run faster, but part of the plan to do so involves using LLVM for its back-end. LLVM can (among other things) produce native machine code output. The last couple of releases of Unladen Swallow have used LLVM for native code generation, but 1) the most recent update on the web site is from late 2009, and 2) the release notes for that version say: "The Unladen Swallow team does not recommend wide adoption of the 2009Q3 release."
Psyco works as a plug-in for Python that basically does JIT compilation, so even though it can speed up execution (quite a lot in some cases), it doesn't produce a machine-code executable you can distribute. In short, while it's sort of similar to what you want, it's not intended to do exactly what you've asked for.
Shed Skin Python-to-C++ produces C++ as its output, and you then compile the C++ and (potentially) distribute the result of that. Shedskin is currently at version 0.5 -- i.e., nobody's claiming that it's a finished, released product. On the other hand, development is ongoing, and each release does seem to include pretty substantial improvements.
PyPy is a Python implementation written in Python. Their intent is to allow code production to be "plugged in" without affecting the rest of the implementation -- but while they currently support 4 different code generation models, I don't believe any of them results in producing native machine code that runs directly on the hardware.
Bottom line: work has been done and is being done with the intent of doing what you asked about, but at least to my knowledge there's not really anything I could reasonably recommend as a finished product that you can really depend on to do the job right now. The primary emphasis is really on execution speed, not producing standalone executables.
Yes, you can - it's called disassembling, and allows you to look at the code of Safari perfectly well. The thing is, C, among other languages, compiles to native code, i.e. code that your CPU can "understand" and execute.
More or less obviously, the level of abstraction present in the instruction set of your CPU is much smaller than that of a high level language like Python. The CPU instructions are not concerned with "downloading that URI", but more "check if that bit is set in a hardware register".
So, in conclusion, the level of complexity present in a native application is much higher when looking at the machine code, so many people simply can't make any sense of what is going on there, it's hard to get the big picture. With experience and time at your hands, it is possible though - people do it all the time, reversing applications and all.
You can't open up and read the code that actually runs for Python either. Try:
import dis

def foo():
    for i in range(100):
        print i

print dis.dis(foo)
That will show you the (human-readable) bytecode of the foo function. Equivalently, you can save the file and import it from the interactive Python interpreter. This will create a .pyc file with the same basename as the script. Open that with a hex editor and you are looking at the actual Python bytecode.
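If you don't have a hex editor handy, a couple of lines of Python show the same thing (foo.pyc stands for whichever .pyc file was generated):

import binascii

with open('foo.pyc', 'rb') as f:
    header = f.read(16)          # magic number and header fields
print(binascii.hexlify(header))  # raw bytes of the compiled file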
The reason for the difference is that Python changes its byte code between releases, so you would need to distribute a different binary-only release for each version of Python. This would be a pain.
With C, it's compiled to native code, and that is much more stable, making binary-only releases possible.
Because C code is compiled to object (machine) code and Python code is compiled to an intermediate byte code. I am not sure if you are even referring to the byte code of Python - you are probably referring to the source file itself, which is directly executable (hiding the byte code from you!). C needs to be compiled and linked.
Python scripts are parsed and converted to binary only when they're run - i.e., they're text files and you can read them with an editor.
C code is compiled and linked to an executable binary file before they can be run. Normally, only this executable binary file is distributed - hence you need a decompiler. You can always view the source code, if you've access to it.
Not all C programs require decompilers. There's lots of C code distributed in source form. And some Python programs do require decompilers, if distributed as bytecode (.pyc files).
But, to the extent that your assumptions are valid, it's because C is a compiled language while Python is an interpreted language.
Python scripts are analogous to a man looking at a to-do list written in English (or language he understands). The man has to do all the work, every time that list of things has to be done.
If the man, instead of doing the steps on his own each time, creates and programs a robot which can carry out those steps again and again (and probably faster than him), that robot is analogous to the C program.
The man in the python case is called the "interpreter" and in the C case is called the "compiler", and the C robot is called the compiled program/executable.
When you look at the python program source, you see the to-do list. In case of the robot, you see the gears, motors and batteries, etc, which look very different from the to-do list. If you could get hold of the C "to-do" list, it looks somewhat like the python code, just in a different language.
G-WAN executes ANSI C scripts on the fly -making it just like Python scripts.
This can be server-side scripts (using G-WAN as a Web server) or any general-purpose C program and you can link any existing library.
Oh, and G-WAN C scripts are much faster than Python, PHP or Java...

If Python is interpreted, what are .pyc files?

Python is an interpreted language. But why does my source directory contain .pyc files, which are identified by Windows as "Compiled Python Files"?
I've been given to understand that
Python is an interpreted language...
This popular meme is incorrect, or, rather, constructed upon a misunderstanding of (natural) language levels: a similar mistake would be to say "the Bible is a hardcover book". Let me explain that simile...
"The Bible" is "a book" in the sense of being a class of (actual, physical objects identified as) books; the books identified as "copies of the Bible" are supposed to have something fundamental in common (the contents, although even those can be in different languages, with different acceptable translations, levels of footnotes and other annotations) -- however, those books are perfectly well allowed to differ in a myriad of aspects that are not considered fundamental -- kind of binding, color of binding, font(s) used in the printing, illustrations if any, wide writable margins or not, numbers and kinds of builtin bookmarks, and so on, and so forth.
It's quite possible that a typical printing of the Bible would indeed be in hardcover binding -- after all, it's a book that's typically meant to be read over and over, bookmarked at several places, thumbed through looking for given chapter-and-verse pointers, etc, etc, and a good hardcover binding can make a given copy last longer under such use. However, these are mundane (practical) issues that cannot be used to determine whether a given actual book object is a copy of the Bible or not: paperback printings are perfectly possible!
Similarly, Python is "a language" in the sense of defining a class of language implementations which must all be similar in some fundamental respects (syntax, most semantics except those parts of those where they're explicitly allowed to differ) but are fully allowed to differ in just about every "implementation" detail -- including how they deal with the source files they're given, whether they compile the sources to some lower level forms (and, if so, which form -- and whether they save such compiled forms, to disk or elsewhere), how they execute said forms, and so forth.
The classical implementation, CPython, is often called just "Python" for short -- but it's just one of several production-quality implementations, side by side with Microsoft's IronPython (which compiles to CLR codes, i.e., ".NET"), Jython (which compiles to JVM codes), PyPy (which is written in Python itself and can compile to a huge variety of "back-end" forms including "just-in-time" generated machine language). They're all Python (=="implementations of the Python language") just like many superficially different book objects can all be Bibles (=="copies of The Bible").
If you're interested in CPython specifically: it compiles the source files into a Python-specific lower-level form (known as "bytecode"), does so automatically when needed (when there is no bytecode file corresponding to a source file, or the bytecode file is older than the source or compiled by a different Python version), usually saves the bytecode files to disk (to avoid recompiling them in the future). OTOH IronPython will typically compile to CLR codes (saving them to disk or not, depending) and Jython to JVM codes (saving them to disk or not -- it will use the .class extension if it does save them).
These lower level forms are then executed by appropriate "virtual machines" also known as "interpreters" -- the CPython VM, the .Net runtime, the Java VM (aka JVM), as appropriate.
So, in this sense (what do typical implementations do), Python is an "interpreted language" if and only if C# and Java are: all of them have a typical implementation strategy of producing bytecode first, then executing it via a VM/interpreter.
More likely the focus is on how "heavy", slow, and high-ceremony the compilation process is. CPython is designed to compile as fast as possible, as lightweight as possible, with as little ceremony as feasible -- the compiler does very little error checking and optimization, so it can run fast and in small amounts of memory, which in turns lets it be run automatically and transparently whenever needed, without the user even needing to be aware that there is a compilation going on, most of the time. Java and C# typically accept more work during compilation (and therefore don't perform automatic compilation) in order to check errors more thoroughly and perform more optimizations. It's a continuum of gray scales, not a black or white situation, and it would be utterly arbitrary to put a threshold at some given level and say that only above that level you call it "compilation"!-)
They contain byte code, which is what the Python interpreter compiles the source to. This code is then executed by Python's virtual machine.
Python's documentation explains the definition like this:
Python is an interpreted language, as opposed to a compiled one, though the distinction can be blurry because of the presence of the bytecode compiler. This means that source files can be run directly without explicitly creating an executable which is then run.
There is no such thing as an interpreted language. Whether an interpreter or a compiler is used is purely a trait of the implementation and has absolutely nothing whatsoever to do with the language.
Every language can be implemented by either an interpreter or a compiler. The vast majority of languages have at least one implementation of each type. (For example, there are interpreters for C and C++ and there are compilers for JavaScript, PHP, Perl, Python and Ruby.) Besides, the majority of modern language implementations actually combine both an interpreter and a compiler (or even multiple compilers).
A language is just a set of abstract mathematical rules. An interpreter is one of several concrete implementation strategies for a language. Those two live on completely different abstraction levels. If English were a typed language, the term "interpreted language" would be a type error. The statement "Python is an interpreted language" is not just false (because being false would imply that the statement even makes sense, even if it is wrong), it just plain doesn't make sense, because a language can never be defined as "interpreted."
In particular, if you look at the currently existing Python implementations, these are the implementation strategies they are using:
IronPython: compiles to DLR trees which the DLR then compiles to CIL bytecode. What happens to the CIL bytecode depends upon which CLI VES you are running on, but Microsoft .NET, GNU Portable.NET and Novell Mono will eventually compile it to native machine code.
Jython: interprets Python sourcecode until it identifies the hot code paths, which it then compiles to JVML bytecode. What happens to the JVML bytecode depends upon which JVM you are running on. Maxine will directly compile it to un-optimized native code until it identifies the hot code paths, which it then recompiles to optimized native code. HotSpot will first interpret the JVML bytecode and then eventually compile the hot code paths to optimized machine code.
PyPy: compiles to PyPy bytecode, which then gets interpreted by the PyPy VM until it identifies the hot code paths which it then compiles into native code, JVML bytecode or CIL bytecode depending on which platform you are running on.
CPython: compiles to CPython bytecode which it then interprets.
Stackless Python: compiles to CPython bytecode which it then interprets.
Unladen Swallow: compiles to CPython bytecode which it then interprets until it identifies the hot code paths which it then compiles to LLVM IR which the LLVM compiler then compiles to native machine code.
Cython: compiles Python code to portable C code, which is then compiled with a standard C compiler
Nuitka: compiles Python code to machine-dependent C++ code, which is then compiled with a standard C compiler
You might notice that every single one of the implementations in that list (plus some others I didn't mention, like tinypy, Shedskin or Psyco) has a compiler. In fact, as far as I know, there is currently no Python implementation which is purely interpreted, there is no such implementation planned and there never has been such an implementation.
Not only does the term "interpreted language" not make sense, even if you interpret it as meaning "language with interpreted implementation", it is clearly not true. Whoever told you that, obviously doesn't know what he is talking about.
In particular, the .pyc files you are seeing are cached bytecode files produced by CPython, Stackless Python or Unladen Swallow.
These are created by the Python interpreter when a .py file is imported, and they contain the "compiled bytecode" of the imported module/program, the idea being that the "translation" from source code to bytecode (which only needs to be done once) can be skipped on subsequent imports if the .pyc is newer than the corresponding .py file, thus speeding startup a little. But it's still interpreted.
To speed up loading modules, Python caches the compiled content of modules in .pyc.
CPython compiles its source code into "byte code", and for performance reasons it caches this byte code on the file system whenever the source file changes. This makes loading of Python modules much faster because the compilation phase can be bypassed. When your source file is foo.py, CPython caches the byte code in a foo.pyc file right next to the source.
In Python 3, Python's import machinery is extended to write and search for byte code cache files in a single directory inside every Python package directory. This directory is called __pycache__.
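You can ask the import machinery where that cached file would live for a given source file; for example:

import importlib.util

print(importlib.util.cache_from_source('foo.py'))
# e.g. __pycache__/foo.cpython-311.pyc - the exact tag depends on the interpreter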
PEP 3147 contains a flow chart describing how modules are loaded.
For more information:
ref:PEP3147
ref:“Compiled” Python files
THIS IS FOR BEGINNERS,
Python automatically compiles your script to compiled code, so-called byte code, before running it.
Running a script is not considered an import and no .pyc will be created.
For example, if you have a script file abc.py that imports another module xyz.py, when you run abc.py, xyz.pyc will be created since xyz is imported, but no abc.pyc file will be created since abc.py isn’t being imported.
If you need to create a .pyc file for a module that is not imported, you can use the py_compile and compileall modules.
The py_compile module can manually compile any module. One way is to use the py_compile.compile function in that module interactively:
>>> import py_compile
>>> py_compile.compile('abc.py')
This will write the .pyc to the same location as abc.py (you can override that with the optional parameter cfile).
You can also automatically compile all files in a directory or directories using the compileall module.
python -m compileall .
If the directory name (the current directory in this example) is omitted, the module compiles everything found on sys.path.
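The same can be done programmatically; a small sketch (myproject/ is just a placeholder directory name):

import compileall

compileall.compile_dir('myproject', force=True)  # recompile every .py under myproject/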
Python (at least the most common implementation of it) follows a pattern of compiling the original source to byte codes, then interpreting the byte codes on a virtual machine. This means it (again, in the most common implementation) is neither a pure interpreter nor a pure compiler.
The other side of this is, however, that the compilation process is mostly hidden -- the .pyc files are basically treated like a cache; they speed things up, but you normally don't have to be aware of them at all. It automatically invalidates and re-loads them (re-compiles the source code) when necessary based on file time/date stamps.
About the only time I've seen a problem with this was when a compiled bytecode file somehow got a timestamp well into the future, which meant it always looked newer than the source file. Since it looked newer, the source file was never recompiled, so no matter what changes you made, they were ignored...
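As an aside, Python 3.7+ can embed a hash of the source in the .pyc instead of a timestamp (PEP 552), which avoids exactly this kind of stale-cache problem; a minimal sketch:

import py_compile

# CHECKED_HASH: the .pyc records a hash of the source and is revalidated against it
py_compile.compile('abc.py',
                   invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH)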
Python's *.py file is just a text file in which you write some lines of code. When you try to execute this file using, say, "python filename.py", the command invokes the Python Virtual Machine. The Python Virtual Machine has 2 components: the "compiler" and the "interpreter". The interpreter cannot directly read the text in the *.py file, so this text is first converted into byte code which is targeted at the PVM (not the hardware, but the PVM). The PVM executes this byte code. A *.pyc file is also generated when the file is imported (whether from the shell or from some other file).
If this *.pyc file is already generated, then every subsequent time you run/execute your *.py file, the system directly loads your *.pyc file, which won't need any compilation (this will save you some processor cycles).
Once the *.pyc file is generated, there is no need for the *.py file, unless you edit it.
tldr; it's code converted from the source code, which the Python VM interprets for execution.
Bottom-up understanding: the final stage of any program is to run/execute the program's instructions on the hardware/machine. So here are the stages preceding execution:
Executing/running on CPU. Machine code is the final stage of conversion; instructions to be executed on the CPU are given in machine code, and machine code can be executed directly by the CPU.
Converting bytecode to machine code. Bytecode is an intermediate stage. It could be skipped for efficiency, but at the cost of portability.
Converting source code to bytecode. Source code is human-readable code. This is what is used when working in IDEs (code editors) such as PyCharm.
Now the actual plot. There are two approaches when carrying any of these stages: convert [or execute] a code all at once (aka compile) and convert [or execute] the code line by line (aka interpret).
For example, we could compile a source code to bytecode, compile bytecode to machine code, interpret machine code for execution.
Some implementations of languages skip stage 3 for efficiency, i.e. compile source code into machine code and then interpret machine code for execution.
Some implementations skip all middle steps and interpret the source code directly for execution.
Modern languages often involve both compiling and interpreting.
Java, for example, compiles source code to bytecode [that is how Java source is distributed, as bytecode], compiles bytecode to machine code [using the JVM], and interprets/executes the machine code. [Thus the JVM is implemented differently for different OSes, but the same Java source code can be executed on different OSes that have a JVM installed.]
Python, for example, compiles source code to bytecode [usually found as .pyc files accompanying the .py source files], compiles bytecode to machine code [done by a virtual machine such as the PVM], and interprets the machine code/executable for execution.
When can we say that a language is interpreted or compiled?
The answer is by looking into the approach used in execution. If it executes the machine code all at once (== compile), then it's a compiled language. On the other hand, if it executes the machine code line-by-line (==interpret) then it's an interpreted language.
Therefore, JAVA and Python are interpreted languages.
A confusion might occur because of the third stage, that's converting bytecode to machine code. Often this is done using a software called a virtual machine. The confusion occurs because a virtual machine acts like a machine, but it's actually not! Virtual machines are introduced for portability, having a VM on any REAL machine will allow us to execute the same source code. The approach used in most VMs [that's the third stage] is compiling, thus some people would say it's a compiled language. For the importance of VMs, we often say that such languages are both compiled and interpreted.
Python code goes through 2 stages. The first step compiles the code into .pyc files, which are actually bytecode. Then this .pyc file (bytecode) is interpreted using the CPython interpreter. Please refer to this link, where the process of code compilation and execution is explained in easy terms.
It's important to distinguish the language specification from language implementations:
The language specification is just a document with the formal specification of the language, with its context-free grammar and the definition of the semantic rules (like specifying primitive types and scope dynamics).
A language implementation is just a program (a compiler) that implements the use of the language according to its specification.
Any compiler consists of two independent parts: a frontend and a backend. The frontend receives the source code, validates it and translates it into an intermediate code. After that, a backend translates it to machine code to run on a physical or a virtual machine.
An interpreter is a compiler too, but in this case it provides a way of executing the intermediate code directly on a virtual machine.
To execute Python code, it is necessary to transform the code into an intermediate code; the code is then "assembled" as bytecode that can be stored in a file.pyc, so there is no need to compile the modules of a program every time you run it.
You can view this assembled python code using:
from dis import dis
def a(): pass
dis(a)
Anyone can build a compiler that produces a static binary for the Python language, just as anyone can build an interpreter for the C language. There are tools (lex/yacc) to simplify and automate the process of building a compiler.
Machines don't understand English or any other high-level language; they understand only machine code. Code therefore has to be either compiled (e.g., C/C++, Java) or interpreted (e.g., Ruby, Python); the .pyc is a cached version of Python's byte code.
https://www.geeksforgeeks.org/difference-between-compiled-and-interpreted-language/
Here is a quick read on the difference between a compiled language and an interpreted language. TL;DR: an interpreted language does not require you to compile all the code before run time, and thus most such languages are not strict on typing, etc.
