I am working with an ARM Cortex-M3 to which I need to port Python (without an operating system). What would be my best approach? I just need the core Python and basic I/O.
Golly, that's kind of a tall order. There are so many services of a kernel that Python depends upon, and that you'd have to provide yourself. I'd think you'd be far better off looking for a lightweight OS -- maybe Minix 3? -- to put on your embedded processor.
Failing that, I'd be horribly tempted to think about hand-translating to C and building the essentials on that.
You should definitely look at eLua:
http://www.eluaproject.net
"Embedded power, driven by Lua
Quickly prototype and develop embedded software applications with the power of Lua and run them on a wide range of microcontroller architectures"
There are a few projects that have attempted to port Python to the situation you mention; take a look at python-on-a-chip, PyMite, or tinypy. These are aimed at lower-power microcontrollers without an OS and tend to target slightly older versions of the Python language with reduced library support.
One possible approach is to build your own stack machine in software to interpret and execute Python byte code directly. Certainly not a porting job and quite labor-intensive to implement, but a self-contained Python byte code stack processor built for your embedded system gets you around needing an operating system.
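To give a feel for what such a stack processor involves, here is a toy sketch in ordinary Python (made-up opcodes, not real CPython bytecode; a real implementation would have to cover the full CPython opcode set, frames, and object model):

# Toy stack machine: interprets made-up (opcode, argument) pairs,
# purely to illustrate the shape of a bytecode stack processor.
def run(program):
    stack = []
    for op, arg in program:
        if op == "PUSH":        # push a constant onto the stack
            stack.append(arg)
        elif op == "ADD":       # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "PRINT":     # pop and print the top of the stack
            print(stack.pop())
        else:
            raise ValueError("unknown opcode %r" % op)

# (2 + 3) evaluated and printed by the toy machine
run([("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None)])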
Another approach is writing your own low level executive (one step below a general purpose OS) that contains the bare minimum in services that a core Python interpreter port requires. I am not certain if this is more or less labor intensive than building a stack processor.
I am not recommending either of these approaches - personally, I like Charlie Martin's Minix 3 approach best since it is a balanced requirements compromise. On the other hand, what I suggest might be interesting if your project absolutely requires Python without an operating system and if the project has an excellent time and money budget.
Update 5 Mar 2012: Given a strict adherence to your Python/no-OS requirements, another possible path to a solution may lie in using an OS-less Java VM (e.g., jnode, currently in beta) and using Jython to create Java byte code from Python. Certainly not an ideal off-the-shelf solution, but it does seem to meet an OS-less Python requirement.
Compile it to C :)
http://shed-skin.blogspot.com/
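Roughly speaking, Shed Skin only handles an implicitly statically typed subset of Python (which it translates to C++). As a hedged illustration, and not a statement of Shed Skin's exact rules, code in this style — plain functions, consistent types, no dynamic tricks — is the kind of thing such a restricted-subset compiler can deal with:

# Example of the restricted, implicitly statically typed style that a
# Python-to-C++ compiler like Shed Skin targets (types never change).
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

def main():
    print(gcd(1071, 462))   # -> 21

if __name__ == '__main__':
    main()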
FYI, I just ported CPython 2.7.x to a non-POSIX OS. It was easy.
You need to write pyconfig.h the right way, remove most of the unused modules, and disable unused features.
Then fix the compile and link errors; after sorting out a few simple runtime problems, it just works.
If you are missing some POSIX header, write one yourself and implement the POSIX functions that are needed, such as file I/O.
It took 2-3 weeks in my case, although I heavily customized the Python core. Unfortunately I cannot open-source it :(.
After that, I think Python can be ported easily to any platform that has enough RAM.
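Once such a port links, a small smoke test that exercises the core interpreter plus whatever I/O layer you stubbed in is a quick way to verify it. This is only a hypothetical sanity script (the file name and checks are placeholders), to be adapted to what your platform actually provides:

# Hypothetical smoke test for a freshly ported minimal CPython:
# core language features first, then the file I/O you implemented yourself.
import sys

print(sys.version)

# pure interpreter work, no OS services required
assert sum(range(10)) == 45
assert "abc".upper() == "ABC"
assert sorted([3, 1, 2]) == [1, 2, 3]

# file I/O goes through whatever POSIX-style layer you wrote
with open("smoke.txt", "w") as f:
    f.write("hello from the port\n")
with open("smoke.txt") as f:
    assert f.read() == "hello from the port\n"

print("smoke test passed")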
Is it possible to make a minimalistic operating system using Python?
I really don't want to get into low-level code like assembly, so I want to use a simple language like Perl or Python. But how?
Unfortunately, Python is classified as a very high-level programming language. It cannot be used, for example, to directly access hardware or perform low-level data structure manipulation. It is completely dependent on something to abstract the hardware from it, and that is the kernel. It is, however, technically possible to create an operating system centered on Python, that is: have only the very low-level stuff written in C and assembly, and have most of the rest of the operating system written in Python.
This article discusses with more detail what languages are suitable for writing operating system kernels.
You can certainly run Python without an OS, as shown by the Intel BIOS Implementation Test Suite (BITS) project. The scripting guide explains:
"... includes Python APIs to access various low-level functionality of the hardware platform, including ACPI, CPU and chipset registers, PCI, and PCI Express. You can write scripts to explore and test platform functionality, using the full power of Python in 32-bit ring 0, without an OS in the way..."
Now, BITS is a BIOS testing platform specific to Intel hardware, and not meant to run a custom Python-based OS, but that doesn't mean you couldn't try it...
I have ported the Python interpreter to run in my operating system as a userspace program; it was the first program, and so far the only one, that I ported. From this experience, I'd say it would certainly be possible to write a lot of operating system functionality in Python; you could even embed Python in the kernel with rather minimal feature support.
However, you need to write assembly and C for the interrupts, low-level memory management and so on. In my case, I built a specially modified Python 2.5.2 against the Newlib C library; in the minimal case you just need to provide heap memory management for Newlib, and you can have Python running on top of it.
The Python interpreter does not contain its own heap implementation and does depend on the C library, so you cannot run it on bare metal right away; but much more of the operating system kernel, as conventionally written, could also be written in Python.
The special case, of course, is microkernels, where much of the functionality lives in userspace as services; those services can be implemented more naturally in any preferred programming language, Python included.
I suggest you find a good textbook on operating system design, and study that. I'm pretty sure you won't find such a book with Python source code; C is more likely. (You might find an older textbook that uses Pascal instead of C, but it's really not that different.)
Once you have studied operating systems design enough to actually be able to write an operating system, you will know enough to have your own opinions on what languages would be suitable.
Is it possible to deploy Python applications such that you don't release the source code and you don't have to be sure the customer has Python installed?
I'm thinking maybe there is some installation process that can run a Python app from just the .pyc files and a shared library containing the interpreter, or something like that?
Basically I'm keen to get the development benefits of a language like Python - high productivity etc. - but can't quite see how you could deploy it professionally to a customer where you don't know how their machine is set up and you definitely can't deliver the source.
How do professional software houses developing in Python do it (or maybe the answer is that they don't)?
You protect your source code legally, not technologically. Distributing .py files really isn't a big deal. The only technological solution here is not to ship your program at all (which is becoming more popular these days, as software is increasingly provided over the internet rather than fully installed locally).
If you don't want the user to have to have Python installed but want to run Python programs, you'll have to bundle Python. Your resistance to doing so seems quite odd to me. Java programs have to either bundle or anticipate the JVM's presence. C programs have to either bundle or anticipate libc's presence (usually the latter), etc. There's nothing hacky about using what you need.
Professional Python desktop software bundles Python, either through something like py2exe/cx_Freeze/some in-house thing that does the same thing or through embedding Python (in which case Python comes along as a library rather than an executable). The former approach is usually a lot more powerful and robust.
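As a rough sketch of that bundling route, a minimal cx_Freeze setup script looks something like the following (the names main.py and myapp are placeholders; check the cx_Freeze documentation for the options your version supports):

# setup.py -- minimal cx_Freeze example; script and app names are placeholders
from cx_Freeze import setup, Executable

setup(
    name="myapp",
    version="1.0",
    description="Example frozen application",
    executables=[Executable("main.py")],
)

Running python setup.py build then collects your code, its dependencies, and the Python runtime into a self-contained build directory that you can ship.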
Yes, it is possible to make installation packages. Look for py2exe, cx_freeze and others.
No, it is not possible to keep the source code completely safe. There are always ways to decompile.
Source code roughly equivalent to the original (minus comments) can be obtained from .pyc files fairly easily if someone wants to do it. Code obfuscation would make it more difficult, but not impossible, to do something with the code.
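For the ship-only-.pyc-files variant, the standard library can do the byte-compiling for you; a small sketch (the directory name src is a placeholder, and as noted above, decompilers can still recover equivalent source):

# Byte-compile a source tree so that only the .pyc files need to be shipped.
import compileall

# compile every .py file under src/ to byte code
compileall.compile_dir("src", force=True)

The same thing is available from the command line as python -m compileall src.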
I am surprised no one mentioned this before now, but Cython seems like a viable solution to this problem. It will take your Python code and translate it into C code built against the CPython API. You also get a small speed boost (~25% last I checked), since it compiles to native machine code instead of just Python byte code. You still need to be sure the user has Python installed (either by making it a prerequisite pushed off onto the user to deal with, or by bundling it as part of the installer process). Also, you do need at least one small part of your application in pure Python: the hook into the main function.
So you would need something basic like this:
import cython_compiled_module

if __name__ == '__main__':
    cython_compiled_module.main()
But this effectively leaks no implementation details. I think using Cython should meet the criteria in the question, but it also introduces the added complexity of compiling in C, which loses some of Python's easy cross-platform nature. Whether that is worth it or not is up to you.
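For reference, the compile step itself is usually just a small setup.py using cythonize; a sketch, assuming the Cython source lives in cython_compiled_module.pyx:

# setup.py -- build the compiled extension module from the Cython source
from setuptools import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize("cython_compiled_module.pyx"))

Running python setup.py build_ext --inplace produces the binary extension that the pure-Python hook above imports.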
As others stated, even the resulting compiled C code could be decompiled with a little effort, but it is likely much closer to the kind of obfuscation you were initially hoping for.
Well, it depends what you want to do. If by "not releasing the source code" you mean "the customer should not be able to access the source code in any way", well, you're fighting a losing battle. Even programs written in C can be reverse engineered, after all. If you're afraid someone will steal from you, make them sign a contract and sue them if there's trouble.
But if you mean "the customer should not care about python files, and not be able to casually access them", you can use a solution like cx_Freeze to turn your Python application into an executable.
Build a web application in python. Then the world can use it via a browser with zero install.
Why should I use a VM, like Parrot, for a dynamic language I use (Python, Perl, ...) if I already have an interpreter? What can I potentially gain, for the cost of having a different VM between my code and my machine, and of using a separate interpreter?
(I am new to the VM topic, so maybe the answer is obvious.)
EDIT
What's the benefit of Parrot VM for end-users?
Why should I use a VM, like Parrot, for a dynamic language I use (Python, Perl, ...) if I already have an interpreter?
First, if you're starting a project, then you may not already have an interpreter.
However, assuming you have an interpreter and are considering whether to add functionality to it or rewrite it to use Parrot, the tradeoffs that come to mind are:
In general, Parrot is probably better tested than the interpreter in question (better optimizer, better garbage collector, etc.)
In general, Parrot's developers know more about cross-platform issues than run-of-the-mill programmers
In general, Parrot has solved most problems you're likely to run into
Parrot was designed with complete generality in mind, and that added a ton of complexity; you may not need the extra generality
Personally, Parrot's optimizer (and register-based design, largely to make optimizations easier) and well tested cross platform codebase would be enough to convince me.
Parsing the ASCII source code is slow. It is faster if the source file gets parsed once, and then the interpreter uses a binary structure. In Python this structure gets stored in .pyc files for fast reuse.
There are two steps:
Parse the source, create byte code
Run (interpret) the byte code.
This is also how e.g. Scala works: there is no Scala VM. Scala is just a new syntax; the Scala compiler creates Java byte code.
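In CPython you can observe both steps from within the language itself; a small illustration using the built-in compile() and the standard dis module:

# Step 1: parse the source and produce a code object (byte code).
# Step 2: hand the byte code to the interpreter loop to execute it.
import dis

source = "x = 2 + 3\nprint(x)\n"
code = compile(source, "<example>", "exec")   # step 1: source -> byte code

dis.dis(code)   # show the byte code the interpreter will run
exec(code)      # step 2: run it (prints 5)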
Wouldn't it be possible to have an OS entirely in Python if the Python VM itself were built into hardware? Something like the good old Lisp Machine?
Suppose I have a CPU that is a hardware implementation of the Python virtual machine; wouldn't all programs written in Python then perform at the speed of assembly (Python is mostly interpreted, but we can compile it)?
If we had such a 'python-microprocessor', what about the memory and other subsystems? Would it be compatible with current memory?
Is there any information on the registers and the Python VM architecture, something similar to what we have for the 8086?
Wouldn't it be possible to have an OS entirely in Python if the Python VM itself were built into hardware? Something like the good old Lisp Machine?
Yes, theoretically it would be possible.
Suppose I have a CPU that is a hardware implementation of the Python virtual machine; wouldn't all programs written in Python then perform at the speed of assembly (Python is mostly interpreted, but we can compile it)?
Python doesn't have a speed; it's a language. The speed of the interpreter (in this case the processor) can be tested. But just as it's difficult to compare the performance of a RISC and a CISC processor, comparing assembly with Python will be difficult too.
If we had such a 'python-microprocessor', what about the memory and other subsystems? Would it be compatible with current memory?
The Python microprocessor would have to do the memory management (and thus the garbage collection). Since that's normally done by the interpreter, the microprocessor would now have to do it.
Is there any information on the registers and the Python VM architecture, something similar to what we have for the 8086?
Normally you don't access the memory directly in Python, so the registers shouldn't be relevant here.
Similar things were tried for Java, but none really took the world by storm.
Yeah, it might be possible, but designing new hardware is expensive. Would the return on investment justify building such a toy? I'd guess not, otherwise someone would have tried it by now. :)
Suppose I have a CPU that is a hardware implementation of the Python virtual machine; wouldn't all programs written in Python then perform at the speed of assembly (Python is mostly interpreted, but we can compile it)?
Yes, it would run at assembly speed. See this link for a comparison with AVR microcontroller assembly code: http://pycpu.wordpress.com/code-examples/speed-pycpu-vs-8bit-avr/.
It is a hardware implementation of a CPU that can execute only a very limited subset of Python bytecode, but enough for if conditions and while loops with simple integers.
In the 1970s such ideas were quite popular. The idea was to close the semantic gap between compilers/virtual machines and instruction set architectures, and thereby bring programming languages and hardware closer together. However, when Patterson and Ditzel published The Case for the Reduced Instruction Set Computer (PDF, 672KB), and after the success of RISC and the microprocessor, the idea of closing the semantic gap was basically dead.
Now, with ever increasing transistor counts the idea may become interesting again. But, as others already noted, designing chips is costly. You need a very good reason to sink so much money. But it is definitely possible. IBM and Azul have shown this with their massively parallel Java Chips.
I guess you should call Google and convince them that they urgently need a Python processor. ;-)
New operating systems are interesting and cool, and basing one on Python would be cool. Then again, Linux is so good and already has so much development behind it. It would have to be the "right time".
Since I'm not strictly a Python developer, please don't flame me just for asking the question.
I'm wondering about Python 3k, which, from my point of view, might be some kind of misconception,
or a rather irrelevant step forward (I'm taking into account the 2.6 and 3k releases, which came almost one after the other).
Before the flames start, I'll explain my position on this topic and state a couple of facts from my work environment.
I work in a cutting-edge market data solutions company;
we mainly use functional languages for high-throughput data analysis.
But we also use Python as a second technology for smaller tasks, scripts, process management & monitoring.
Some of my colleagues write more serious production applications based on Python technologies,
but:
ALL of our customers use Python 2.6,
because of the above we have quite a strong 2.6 toolset and internal/external support,
we still plan to develop 2.6 apps,
every small tool we develop is also based on the 2.6 platform.
Additionally, here is what I observe:
at this point any new Linux distribution (in our infrastructure) has Python 2.6 on board,
most third-party modules are developed for the 2.6 version,
a great part of the resources on the net is also dedicated to 2.6
(I know you can port most of this to 3.1.x, but it's too big an overhead here).
I know that Py3k is still growing, but
there is already Python 3.1.1, and no one "cares" (in my environment).
I have a strong feeling that the Python 3k migration overheads are stopping this great technology from moving into a new dimension.
The OP appears to be surprised that a minor-release upgrade (which added some nifty features, basically broke zero existing code, and allowed trivial rebuilds of all existing third party libraries) happened overnight at their organization, while a major-release upgrade (requiring much more effort especially from the point of view of third-party library authors) didn't and won't. I think I mostly feel surprised at their surprise;-).
Even minor-release upgrades don't happen instantly in most large projects and organizations; for example, App Engine is still using Python 2.5 (apparently, upgrading its specialized "sandboxed" Python runtime and all it relies on is not a zero-effort proposition, so they prefer to keep putting their energy towards adding engine features instead) -- so I believe are implementations such as Jython and PyPy (I think IronPython's in the process of migrating to 2.6, but the current production version is still 2.5).
Totally new projects starting today should seriously consider starting with Python 3; for example, Allison Randal's pynie (Python for Parrot) project made exactly that choice (and, I think, it was the correct choice in their situation). Migrating existing projects is a harder proposition, and mostly depends on what third-party components the existing project depends on (if a new project intrinsically depends on some functionality that's only available in 3rd party libraries for 2.6, not 3.1, then the new project will probably also have to stick with the 2.6 version for the time being, of course).
Third party libraries that are under active development will probably come out with Python 3 version gradually (for example, gmpy did so relatively recently). Once enough such third party libraries are available, the chance that a missing library inhibits migrating an existing project (or, even more, starting a new project) using Python 3 starts going down pretty rapidly. This makes Python 3 ports feasible. At some point, some attractive functionality will become available in Python 3 only (for example, if and when pynie releases, that might be the case for a Parrot implementation affording smooth interoperability with Perl &c), and that will provide a strong motivation for some projects to go 3-only (pragmatically stronger than pure issues of language quality).
Even then, some sufficiently large projects and organizations will stick with Python 2 for a long time, and you can confidently expect that at some time a 2.7 will exist (possibly one or two more after that, but that's harder to predict). Hey, I sharply remember that throughout the '90s, in most large projects and organizations, "Fortran" still meant Fortran 77 (in fact in some places -- not many, 30+ years after that Fortran version's first release -- that's still true today!)... for all the advantages of Fortran 90 and later versions, migration costs (esp. in terms of various compilers, libraries, tools) were just perceived as being too high a price to pay for the advantages of the new language version. That's just inevitable when a language acquires a large installed base and a rich ecosystem of third-party tools and libraries, as Python 2 now has. No reason for surprise!-)
Py3k is a good and necessary step in the evolution of the Python language. I like it much better than 2.x, and code in it almost exclusively. I'm fortunately not dependent on third-party libraries, few of which are there for Py3k yet. But they will be.
There is nothing wrong with using 2.6 - 2.7 will come and be even more of a bridge to 3.x, and you can start generating Py3k-like code already that will be trivial to port to 3.x once your favorite third-party library is there. I think I read somewhere that Py2k will be around for several more years.
What exactly is the question here? You observe that there's a newer version of the platform you currently use. You have good reason for staying with the version you have. This pattern is common across very many development organisations; you simply cannot chase every new version of the stuff you use. Nor can you stay where you are forever; eventually you will need to migrate.
There are forces that may drive you to migrate, for example
The new version has some very major new feature you would really benefit from (eg. annotations in Java 5)
Support for a current platform (or dependent library) is being withdrawn
Meanwhile, you could attempt to future-proof your code so that migrating is easier. Are you familiar with the level of change required to move up?
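One concrete, low-risk way to future-proof Python 2.6 code is to opt in to the Python 3 behaviours that 2.6 already back-ports via __future__ imports; for example:

# Valid on Python 2.6+: opt in to Python 3 semantics ahead of time,
# so the eventual migration touches less code.
from __future__ import print_function    # print() is a function, not a statement
from __future__ import unicode_literals  # string literals are unicode by default
from __future__ import division          # 1/2 == 0.5 instead of truncating

print("answer:", 7 / 2)   # -> answer: 3.5 on both Python 2.6+ and 3.x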
Since Python 3k is not backwards compatible, you can expect it to take more time to be accepted in corporate environments. Python has a huge code base and many important third-party libraries. One must wait for all dependencies to be converted to Python 3k before adopting it. This could be a really slow process.
From what I understand, the creators of Python expected this to happen, but they thought it would be worse not to make the changes and just let the language "die".
Python 3.x is not yet aimed at production use, but it will get there in time.
As a language and a runtime, Python 3.x is fine. The limitation is that a lot of important third party libraries and tools are still in the process of being ported and tested.
It is expected that the transition to Python 3.x will take up to five years.
So, if you are not a library developer, you really don't need to care about Python 3.x for the time being.
Also, current Python users are not expected to switch to Python 3.x anytime soon.
There are a lot of libraries which depend on other libraries in the Python ecosystem. Nose is used to test numpy, scipy requires numpy, and lots of scientific libraries require scipy. Also, none of the library developers can start porting until their dependencies are ported. It's going to take a long time for the whole chain to port across.
A lot of the pragmatists are hoping that Python 2.7 and 2.8 will gradually deprecate the bits that won't work in Python 3 (introduce a print function called print_function, then deprecate the print statement, then rename print_function to print ...), so that eventually Python 2 will merge with Python 3.
Here is a video where Guido van Rossum, at the Py4Science meeting, talks about Python 3:
http://fperez.org/py4science/2009_guido_ucb/index.html