How to embed Python3 with the standard library

I am attempting to embed Python in an (ultimately multiplatform) C++ app.
It's important that my app contains its own Python implementation (in the same way that Blender does), so that it is entirely self-contained (otherwise it becomes a configuration minefield).
I have two options:
(1) Attempt to embed Python3 without the standard library (which I have asked about here).
(2) Attempt to embed Python3 with the standard library.
What is required for (2)?
With this information I would be able to balance the merits of each approach against the effort required to set it up.
My embedded Python will be for my own use (rather than any userland scripting) -- mainly control flow / game logic. I will need very little from the standard library -- maybe I can whittle that down to 0 by tunnelling back into C++ whenever necessary -- for example if I need a random number, I can create a C++ routine and access that from Python. I have all of that covered.
However, it is starting to look as though even a minimal installation will have to contain some stdlib component(s), which prompts the question: "If I must include some, maybe it is better to include all!"

Since this doesn't really have an answer, I will offer this for posterity. I also do not have access to a Mac, so it may be a little different for you than on Linux. Also, the required dependencies must be installed for this to work, but that is usually easy enough to figure out.
Create a working directory
mkdir ~/embeddedpython
cd ~/embeddedpython
Download the Python source
wget https://www.python.org/ftp/python/3.6.1/Python-3.6.1.tgz
Create an installation directory for Python
mkdir ./installation
Extract the downloaded source files
tar xvf Python-3.6.1.tgz
Enter the newly created source directory
cd Python-3.6.1
Configure Python to install in our installation directory
./configure --prefix="/home/<username>/embeddedpython/installation"
Make and install Python
make && make install
Go back to your working directory
cd ..
Create a new PYTHONHOME directory where the library will reside
mkdir home && mkdir home/lib
Copy the Python library to our new home directory
cp -r ./installation/lib/python3.6 ./home/lib/
Create a new C++ source file (embeddedpython.cpp) with the following code taken from the Python documentation, with the exception of the setenv function call.
#include <Python.h>
#include <cstdlib>

int main(int argc, char *argv[])
{
    setenv("PYTHONHOME", "./home", 1);

    wchar_t *program = Py_DecodeLocale(argv[0], NULL);
    if (program == NULL) {
        fprintf(stderr, "Fatal error: cannot decode argv[0]\n");
        exit(1);
    }
    Py_SetProgramName(program); /* optional but recommended */
    Py_Initialize();
    PyRun_SimpleString("from time import time,ctime\n"
                       "print('Today is', ctime(time()))\n");
    if (Py_FinalizeEx() < 0) {
        exit(120);
    }
    PyMem_RawFree(program);
    return 0;
}
Compile and run
g++ embeddedpython.cpp -I ./installation/include/python3.6m/ ./installation/lib/libpython3.6m.a -lpthread -ldl -lutil
./a.out
> Today is Fri Apr 14 16:06:54 2017
From here on it is standard embedded Python as usual. With this method, the "home" directory must be included in your deployment, and the environment variable PYTHONHOME must be set to point to it before any Python-related code is executed.
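An alternative to the setenv call, not part of the recipe above, is to set the home directory through the C API itself: Py_SetPythonHome takes a wide-character path and must be called before Py_Initialize. A minimal sketch, assuming the same ./home layout:
#include <Python.h>

int main(int argc, char *argv[])
{
    /* Equivalent to setting PYTHONHOME; must happen before Py_Initialize().
       Python keeps using this buffer, so it is deliberately not freed here. */
    wchar_t *home = Py_DecodeLocale("./home", NULL);
    Py_SetPythonHome(home);

    Py_Initialize();
    PyRun_SimpleString("import sys; print(sys.prefix)");
    Py_Finalize();
    return 0;
}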

You are looking for Boost.Python!
It's a C++ library which enables seamless interoperability between C++ and the Python programming language, and in my opinion it should meet your needs, unless you are trying to achieve something else.
It also has a mechanism for embedding the python interpreter into C++ code and one can refer to this link (URL isn't release specific) to delve into the possibilities.
P.S. I believe less in reinventing the wheel and more in re-usability.

I suppose that you have already double-checked how to embed Python in another application (the linked material covers embedding Python 2, but in my opinion it holds for Python 3 as well).
There are different types of embedding:
Very High Level Embedding
Beyond Very High Level Embedding
Pure Embedding
Embedding Python in C++
As your question is relative to "Embedding Python in C++" you may read this:
It is also possible to embed Python in a C++ program; precisely how
this is done will depend on the details of the C++ system used; in
general you will need to write the main program in C++, and use the
C++ compiler to compile and link your program. There is no need to
recompile Python itself using C++.
On the one hand you said "(ultimately multiplatform) C++ app", and on the other hand the documentation says "precisely how this is done will depend on the details of the C++ system used", so could you explain more about the details of the C++ system used?
You may also find some tips here about the use of the pybind11 module, or in another old page which covers how to Embed Python and Import Modules in C/C++ (it targets Python 2.6, but I hope you may find inspiration in it).
To conclude:
You'll obviously need development packages for Python in order to have
the Python include directory

Related

Bundle all dependencies in Cython compile [duplicate]

I am trying to make one unix executable file from my python source files.
I have two files, p1.py and p2.py.
p1.py :-
from p2 import test_func
print (test_func())
p2.py :-
def test_func():
    return ('Test')
Now, as we can see, p1.py depends on p2.py. I want to make an executable file by combining the two files. I am using Cython.
I changed the file names to p1.pyx and p2.pyx respectively.
Now, I can make the file executable by using Cython:
cython p1.pyx --embed
It will generate a C source file called p1.c. Next we can use gcc to make it executable:
gcc -Os -I /usr/include/python3.5m -o test p1.c -lpython3.5m -lpthread -lm -lutil -ldl
But how do I combine the two files into one executable?
People are tempted to do this because it's fairly easy to do for the simplest case (one module, no dependencies). @ead's answer is good, but honestly pretty fiddly, and it handles only the next-simplest case (two modules that you have complete control of, no dependencies).
In general a Python program will depend on a range of external modules. Python comes with a large standard library which most programs use to an extent. There's a wide range of third party libraries for maths, GUIs, web frameworks. Even tracing those dependencies through the libraries and working out what you need to build is complicated, and tools such as PyInstaller attempt it but aren't 100% reliable.
When you're compiling all these Python modules you're likely to come across a few Cython incompatibilities/bugs. It's generally pretty good, but struggles with features like introspection, so it's unlikely a large project will compile cleanly and entirely.
On top of that many of those modules are compiled modules written either in C, or using tools such as SWIG, F2Py, Cython, boost-python, etc.. These compiled modules may have their own unique idiosyncrasies that make them difficult to link together into one large blob.
In summary, it may be possible, but for non-trivial programs it is not a good idea however appealing it seems. Tools like PyInstaller and Py2Exe that use a much simpler approach (bundle everything into a giant zip file) are much more suitable for this task (and even then they struggle to be really robust).
Note this answer is posted with the intention of making this question a canonical duplicate for this problem. While an answer showing how it might be done is useful, "don't do this" is probably the best solution for the vast majority of people.
There are some hoops you have to jump through to make it work.
First, you must be aware that the resulting executable is a very slim layer which just delegates the whole work to (i.e. calls functions from) pythonX.Ym.so. You can see this dependency when calling
ldd test
...
libpythonX.Ym.so.1.0 => not found
...
So, to run the program you either need to have LD_LIBRARY_PATH pointing to the location of libpythonX.Ym.so, or build the exe with the --rpath option; otherwise at start-up of test the dynamic loader will throw an error similar to
/test: error while loading shared libraries: libpythonX.Ym.so.1.0: cannot open shared object file: No such file or directory
The generic build command would look like the following:
gcc -fPIC <other flags> -o test p1.c -I<path_python_include> -L<path_python_lib> -Wl,-rpath=<path_python_lib> -lpython3.6m <other_needed_libs>
It is also possible to build against static version of the python-library, thus eliminating run time dependency on the libpythonX.Ym, see for example this SO-post.
The resulting executable test behaves exactly the same as if it were a python-interpreter. This means that now, test will fail because it will not find the module p2.
One simple solution would be to cythonize the p2-module in place (cythonize p2.pyx -i): you would get the desired behavior - however, you would have to distribute the resulting shared object p2.so along with test.
It is easy to bundle both extensions into one executable - just pass both cythonized c-files to gcc:
# creates p1.c:
cython --embed p1.pyx
# creates p2.c:
cython p2.pyx
gcc ... -o test p1.c p2.c ...
But now a new (or old) problem arises: the resulting test-executable cannot once again find the module p2, because there is no p2.py and no p2.so on the python-path.
There are two similar SO questions about this problem, here and here. In your case the proposed solutions are kind of overkill, here it is enough to initialize the p2 module before it gets imported in the p1.pyx-file to make it work:
# making init-function from other modules accessible:
cdef extern object PyInit_p2();
#init/load p2-module manually
PyInit_p2() #Cython handles error, i.e. if NULL returned
# actually using already cached imported module
# no search in python path needed
from p2 import test_func
print(test_func())
Calling the init-function of a module prior to importing it (actually the module will not really be imported a second time, only looked up in the cache) also works if there are cyclic dependencies between modules. For example, if module p2 imports module p3, which imports p2 in its turn.
Warning: Since Cython 0.29, Cython uses multi-phase initialization by default for Python >= 3.5, thus calling PyInit_p2 is not enough (see e.g. this SO-post). To switch off this multi-phase initialization, -DCYTHON_PEP489_MULTI_PHASE_INIT=0 should be passed to gcc, or the equivalent to other compilers.
Note: However, even after all of the above, the embedded interpreter will need its standard libraries (see for example this SO-post) - there is much more work to do to make it truly standalone! So maybe one should heed @DavidW's advice:
"don't do this" is probably the best solution for the vast majority of
people.
A word of warning: if we declare PyInit_p2() as
from cpython cimport PyObject
cdef extern PyObject *PyInit_p2();
PyInit_p2(); # TODO: error handling if NULL is returned
Cython will no longer handle the errors and it becomes our responsibility. Instead of
PyObject *__pyx_t_1 = NULL;
__pyx_t_1 = PyInit_p2(); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 4, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
produced for object-version, the generated code becomes just:
(void)(PyInit_p2());
i.e. no error checking!
On the other hand using
cdef extern from *:
    """
    PyObject *PyInit_p2(void);
    """
    object PyInit_p2()
will not work with g++ - one has to add extern "C" to the declaration inside the verbatim block.

Wrapping C++ code with python (manually)

I have a main file (main.cpp) and a header file (nodes.hpp). The main file takes N (any positive integer) as an input argument and, using the functions of the header file, produces output, say x and y (both doubles).
Note:
Both the main and header files are written in C++.
Instead of using data structures such as arrays or vectors, both files make use of the Eigen library.
I have to write a Python wrapper for them. I have working knowledge of Python but have never used any wrapper.
Can anybody please point me to references or notes about writing a Python wrapper for such code?
Here are your options:
You can use ctypes, and I consider this the cleanest solution, because you convert your program to a shared library that can be called by any other software, not only Python. You, though, have to write a C-interface for your program yourself.
You can use Python C-Extension, and I consider this the worst solution, because it's very low level, prone to memory leaks, costs lots of time to implement one function, and is Python-version dependent. Basically this is good for starting a Python interpreter inside your C++. You can create PyObjects (which are the main building block of any Python type) and deal with them inside C/C++.
You can use SWIG, where it automatically creates the interface that you would otherwise write with ctypes, through an interface file that you define. People say it's very good, but the documentation is not as good.
You can use Boost.Python, which is good, but it has a very ugly build system with bjam. If you can manage to bypass that, then it's even better than ctypes. I'm a big boost fan, but bjam is why I don't use this.
What I typically do is ctypes. I trust it because it emphasizes the single-responsibility principle. The library has a job that's separate from the interface (the C-interface), which is also separate from your Python script that uses that interface and exposes "the easy functionality" to the user.
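As an illustration of the ctypes route described above, here is a minimal sketch of such a hand-written C-interface; the library and function names (mylib, mylib_distance) are hypothetical:
// mylib_c_api.cpp -- hypothetical thin C interface over existing C++ code,
// built as a shared library and loaded from Python via ctypes.
#include <cmath>

namespace mylib {
    // Stand-in for the C++ functionality you actually want to expose.
    double distance(double x, double y) { return std::sqrt(x * x + y * y); }
}

extern "C" {
    // Only plain C types cross this boundary, so ctypes can call it directly.
    double mylib_distance(double x, double y) {
        return mylib::distance(x, y);
    }
}
Built with something like g++ -shared -fPIC -o libmylib.so mylib_c_api.cpp, the Python side only needs ctypes.CDLL("./libmylib.so") plus argtypes/restype declarations for mylib_distance, which keeps the library, the C-interface and the calling script as separate responsibilities.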
Use Boost.Python. Here is my tutorial, previously on SO Docs.
Using Boost.Python
Things are easy when you have to use a C++ library in a Python project. You can just use Boost.
First of all here is a list of components you need:
A CMakeLists.txt file, because you're going to use CMake.
The C++ files of the C++ project.
The python file - this is your python project.
Let's start with a small C++ file. Our C++ project has only one method which returns some string "This is the first try". Call it CppProject.cpp
#include <boost/python.hpp>

char const *firstMethod() {
    return "This is the first try.";
}

BOOST_PYTHON_MODULE(CppProject) {
    boost::python::def("getTryString", firstMethod); // boost::python is the namespace
}
Have a CMakeLists.txt file as below:
cmake_minimum_required(VERSION 2.8.3)
FIND_PACKAGE(PythonInterp)
FIND_PACKAGE(PythonLibs)
FIND_PACKAGE(Boost COMPONENTS python)
INCLUDE_DIRECTORIES(${Boost_INCLUDE_DIRS} ${PYTHON_INCLUDE_DIRS})
PYTHON_ADD_MODULE(NativeLib CppProject)
FILE(COPY MyProject.py DESTINATION .) # See the whole tutorial to understand this line
Up to this part of the tutorial everything is easy: you can import the library and call the method in your Python project. Call your Python project MyProject.py.
import NativeLib
print(NativeLib.getTryString())
In order to run your project follow the instructions below:
Create a directory with the name build.
Enter into that directory.
Give the command cmake -DCMAKE_BUILD_TYPE=Release ..
make
Run python MyProject.py. Now you should see the string which the method in your C++ project returns.
Another tool for C++ wrapper generation is CLIF. Released in 2017, Google uses this for most everything these days. We no longer allow new SWIG wrappers to be written for Python internally.
It is built on top of Clang for the C++ parsing and requires relatively idiomatic modern C++ API use (unsurprisingly following Google's Style Guide) rather than any attempt to allow you to shoot yourself in the foot via SWIG's "support everything poorly" approach.
Try with official documentation:
https://docs.python.org/2/extending/extending.html
This link provides a simple example of how to write a cpp module and use it from the Python interpreter; or, if possible, try Cython: http://cython.org/
Cython lets you write Python-like code with C-like typing, which is translated to C/C++, compiled, and then easily accessible from Python.
You can use Boost.Python
or go with the Python native interface
I would recommend Boost.Python if you already have Boost set up.

Unit test C-generating python code

I have a project which involves some (fairly simple) C-code generation as part of a build system. In essence, I have some version information associated with a project (embedded C) which I want to expose in my binary, so that I can easily determine what firmware version was programmed to a particular device for debugging purposes.
I'm writing some simplistic python tools to do this, and I want to make sure they're thoroughly tested. In general, this has been fairly straightforward, but I'm unsure what the best strategy is for the code-generation portion. Essentially, I want to make sure that the generated files both:
Are syntactically correct
Contain the necessary information
The second, I can (I believe) achieve to a reasonable degree with regex matching. The first, however, is something of a bigger task. I could probably use something like pycparser and examine the resulting AST to accomplish both goals, but that seems like an unnecessarily heavyweight solution.
Edit: A dataflow diagram of my build hierarchy
Thanks for the diagram! Since you are not testing for coverage, if it were me, I would just compile the generated C code and see if it worked :) . You didn't mention your toolchain, but in a Unix-like environment, gcc <whatever build flags> -c generated-file.c || echo 'Oops!' should be sufficient.
Now, it may be that the generated code isn't a freestanding compilation unit. No problem there: write a shim. Example shim.c:
#include <stdio.h>
#include "generated-file.c"

int main(void) {
    printf("%s\n", GENERATED_VERSION); // or whatever is in generated-file.c
    return 0;
}
Then gcc -o shim shim.c && diff <(./shim) "name of a file holding the expected output" || echo 'Oops!' should give you a basic test. (The <() is bash process substitution.) The file holding the expected results may already be in your git repo, or you might be able to use your Python routine to write it to disk somewhere.
Edit 2 This approach can work even if your actual toolchain isn't amenable to automation. To test syntactic validity of your code, you can use gcc even if you are using a different compiler for your target processor. For example, compiling with gcc -ansi will disable a number of GNU extensions, which means code that compiles with gcc -ansi is more likely to compile on another compiler than is code that compiles with full-on, GNU-extended gcc. See the gcc page on "C Dialect Options" for all the different flavors you can use (ditto C++).
Edit Incidentally, this is the same approach GNU autoconf uses: write a small test program to disk (autoconf calls it conftest.c), compile it, and see if the compilation succeeded. The test program is (preferably) the bare minimum necessary to test if everything is OK. Depending on how complicated your Python is, you might want to test several different aspects of your generated code with respective, different shims.

Embedding Python into C++

If this question could be worded better/needs to be split into many questions, please alert me
I need to package Python scripts in order to ship them as single-executables (ideally), or single-executables with supporting files (non-ideally).
I have seen py2app and py2exe. They do not fit my requirements, as I am looking for a single method to do this, and in the future may need to have the packaged scripts interact with the executable that they are being run from.
What is the best way to go about this? The scripts which I would be embedding may even require multiple files, which complicates matters I'm sure.
If I wanted to use an interpreter other than CPython (ie: PyPy or Stackless) in the future, would the process be different (other than API calls in the C++ executable)?
Does Python have to be installed on the computers which would be running the package, or does embedding Python mean that it is fully embedded? I saw on the Python Wiki something about Py_SetPythonHome(), which would indicate to me that it needs Python (or at least its libraries) to be installed. Am I correct?
PyInstaller with the --onefile option will put everything inside a single file, though it will take a little more time to start up. I'm afraid it's not compatible with PyPy, but it should not be so tricky to get it working using Stackless.
Good luck!
Are you sure that you need to embed the Python files?
I ask because you mention you want to package the Python files as single executables. Couldn't you install Python on the target machine? Since Python scripts are executable on their own, you would only need something to kick them off: a master Python script could kick off all the rest.
Otherwise you should look at what in C++ can run a Python script, then have the master Python script run all the other scripts.
Yes you can!
The easiest way is with the newest Python3 embeddable zip file for your OS and architecture, which contains the .so/.dll and all modules.
Instructions are here: https://docs.python.org/3/extending/embedding.html
#include <Python.h>

int
main(int argc, char *argv[])
{
    wchar_t *program = Py_DecodeLocale(argv[0], NULL);
    if (program == NULL) {
        fprintf(stderr, "Fatal error: cannot decode argv[0]\n");
        exit(1);
    }
    Py_SetProgramName(program); /* optional but recommended */
    Py_Initialize();
    PyRun_SimpleString("from time import time,ctime\n"
                       "print('Today is', ctime(time()))\n");
    Py_Finalize();
    PyMem_RawFree(program);
    return 0;
}
The trickier part is communication between your C++ code and Python code.
If you only have to pass a few strings back and forth, that's trivial, but if you want a very tight integration, then you'll probably need a few of these:
create a Python module in C++ (C ABI) and inject it -- that means Python will be able to run import yourmod; yourmod.yourfunc(...) which will run your custom C++ code (see the sketch below) and/or
resolve and call Python functions from C++ (the inverse) https://stackoverflow.com/a/3310608/705086
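A minimal sketch of the first approach (injecting a module before initialization); the names yourmod/yourfunc are just the placeholders used above, and error handling is omitted:
#include <Python.h>

// Custom C++ code exposed to the embedded interpreter as yourmod.yourfunc().
static PyObject *yourmod_yourfunc(PyObject *self, PyObject *args)
{
    return PyLong_FromLong(42);
}

static PyMethodDef yourmod_methods[] = {
    {"yourfunc", yourmod_yourfunc, METH_NOARGS, "Run custom C++ code."},
    {NULL, NULL, 0, NULL}
};

static PyModuleDef yourmod_module = {
    PyModuleDef_HEAD_INIT, "yourmod", NULL, -1, yourmod_methods,
    NULL, NULL, NULL, NULL
};

static PyObject *PyInit_yourmod(void)
{
    return PyModule_Create(&yourmod_module);
}

int main()
{
    // The module must be registered before Py_Initialize().
    PyImport_AppendInittab("yourmod", &PyInit_yourmod);
    Py_Initialize();
    PyRun_SimpleString("import yourmod; print(yourmod.yourfunc())");
    Py_Finalize();
    return 0;
}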
Single executable
If you cannot have an installer, and really do not wish to ship a bunch of .DLL/.so files, the technique is described e.g. in Merge DLL into EXE? It's ABI-specific (there will be one way for Windows, another for Linux, etc.), and you may have to override Python's import machinery (so that it doesn't look for dll/so files in the file system) -- at this point, you may as well use PyInstaller, or extend/merge PyInstaller with your application. Please research https://github.com/pyinstaller/pyinstaller/tree/develop/bootloader/src and evaluate how hard that might be.
PyPy
Simple embedding is possible http://doc.pypy.org/en/latest/embedding.html but single-file embedding is not. You could hack the PyPy source and build a custom embeddable object, but you'd be the first to do so :)
Stackless
Should be as simple as regular CPython.

How to compile a Python package to a dll

Well, I have a Python package. I need to compile it as a dll before distributing it, in a way that is easily importable. How? You may suggest *.pyc. But I read somewhere that any *.pyc can be easily decompiled!
Update:
Follow these:
1) I wrote a python package
2) want to distribute it
3) do NOT want distribute the source
4) *.pyc is decompilable >> source can be extracted!
5) dll is standard
Write everything you want to hide in Cython, and compile it to pyd. That's as close as you can get to making compiled python code.
Also, dll is not a standard, not in Python world. They're not portable, either.
Nowadays a simple solution exists: use the Nuitka compiler as described in the Nuitka User Manual.
Use Case 2 - Extension Module compilation
If you want to compile a single extension module, all you have to do is this:
python -m nuitka --module some_module.py
The resulting file some_module.so can then be used instead of some_module.py.
You need to compile for each platform you want to support and write some initialization code to import so/pyd file ~~appropriate for given platform/python version etc.~~
[EDIT 2021-12]: Actually in python 3 the proper so/dll is determined automatically based on the file name (if it includes python version and platform - can't find PEP for this feature at the moment but Nuitka creates proper names for compiled modules). So for python 2.7 the library name would be something.pyd or something.so whereas for python 3 this would change to something.cp36-win32.pyd or something.cpython-36m-x86_64-linux-gnu.so (for 32bit python 3.6 on x86).
The result is not a DLL as requested but a Python-native compiled binary format (it is not bytecode like in pyc files; the so/pyd format cannot be easily decompiled - Nuitka compiles to machine code through C++ translation).
EDIT [2020-01]: The compiled module can still be inspected through standard Python mechanisms - e.g. it can be imported like any other module and have its methods listed, etc. To secure the implementation from being exposed that way, there is more work to be done than just compiling to a binary module.
You can use py2exe.org to convert Python scripts into Windows executables. Granted this will only work on Windows, but it's better than nothing.
You can embed python inside C. The real trick is converting between C values and Python values. Once you've done that, though, making a DLL is pretty straightforward.
However, why do you need to make a dll? Do you need to use this from a non-python program?
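To illustrate the conversion point above, here is a minimal sketch using the CPython C API (the py_sqrt helper is made up for illustration, and it assumes the interpreter has already been initialized):
#include <Python.h>

/* Illustrative: C double -> Python arguments, call math.sqrt,
   Python float -> C double. Error handling kept to a minimum. */
double py_sqrt(double x)
{
    PyObject *math = PyImport_ImportModule("math");
    if (math == NULL)
        return -1.0;

    PyObject *result = PyObject_CallMethod(math, "sqrt", "(d)", x);
    double value = (result != NULL) ? PyFloat_AsDouble(result) : -1.0;

    Py_XDECREF(result);
    Py_DECREF(math);
    return value;
}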
Python embedding is supported in CFFI version 1.5, you can create a .dll file which can be used by a Windows C application.
I would also suggest using Cython to generate pyd files, as Dikei wrote.
But if you really want to secure your code, you had better write the important stuff in C++. The best would be to combine both C++ and Python. The idea: you would leave the Python code open for adjustments, so that you don't have to compile everything over and over again. That means you would write the "core" in C++ (which is the most secure solution these days) and use those dll files in your Python code. It really depends on what kind of tool or program you are building and how you want to execute it. I mostly create an executable file (exe, app) once I finish a tool or a program, but this is more for the end user. This could be done with py2exe and py2app (both 64-bit compatible). If you embed the interpreter, the end user's machine doesn't have to have Python installed on the system.
A pyd file is the same as a dll and is fully supported inside Python, so you can import your module as normal. You can find more information about it here.
Using and generating pyd files is the fastest and easiest way to create safe and portable python code.
You could also write real dll files in C++ and import them with ctypes to use them (here is a good post, and here the Python description of how it works).
To expand on the answer by Nick ODell
You must be on Windows for DLLs to work, they are not portable.
However the code below is cross platform and all platforms support run-times so this can be re-compiled for each platform you need it to work on.
Python does not (yet) provide an easy tool to create a dll, however you can do it in C/C++
First you will need a compiler (Windows does not have one by default) notably Cygwin, MinGW or Visual Studio.
A basic knowledge of C is also necessary (since we will be coding mainly in C).
You will also need to include the necessary headers, I will skip this so it does not become horribly long, and will assume everything is set up correctly.
For this demonstration I will print a traditional hello world:
Python code we will be converting to a DLL:
def foo(): print("hello world")
C code:
#include "Python.h" // Includes everything to use the Python-C API
int foo(void); // Declare foo
int foo(void) { // Name of our function in our DLL
Py_Initialize(); // Initialise Python
PyRun_SimpleString("print('hello world')"); // Run the Python commands
return 0; // Finish execution
}
Here is the tutorial for embedding Python. There are a few extra things that should be added here, but for brevity I have left those out.
Compile it and you should have a DLL. :)
That is not all. You will need to distribute whatever dependencies are needed, that will mean the python36.dll run-time and some other components to run the Python script.
My C coding is not perfect, so if anyone can spot any improvements please comment and I will do my best to fix it.
It might also be possible in C# from this answer How do I call a specific Method from a Python Script in C#?, since C# can create DLLs, and you can call Python functions from C#.
You can use pyinstaller to convert .py files into an executable, with all the required packages bundled in .dll format.
Step 1: pip install pyinstaller.
Step 2: Create a new Python file; let's name it code.py.
Step 3: Write some lines of code, e.g. print("Hello World").
Step 4: Open a Command Prompt in the same location, type pyinstaller code.py and hit enter.
Last step: in the same location, two folders named build and dist will be created. Inside the dist folder there is a folder code, and inside that folder there is an exe file code.exe along with the required .dll files.
If your only goal is to hide your source code, it is much simpler to just compile your code to an executable (using PyInstaller, for example), and use a module with readable source for communication.
NOTE: You might need more converter functions as shown in this example.
Example:
Module:
import subprocess
import codecs

def _encode_str(s):
    encoded = s.encode("utf-32", "surrogatepass")
    return codecs.encode(encoded, "base64").replace(b"\n", b"")

def _decode_str(b64):
    return codecs.decode(b64, "base64").decode("utf-32", "surrogatepass")

def strlen(s: str):  # return length of str; int
    proc = subprocess.Popen(["path_to_your_exe.exe", "strlen", _encode_str(s).decode("ascii")],
                            stdout=subprocess.PIPE)
    return int(proc.stdout.read())

def random_char_from_string(s):
    proc = subprocess.Popen(["path_to_your_exe.exe", "randchr", _encode_str(s).decode("ascii")],
                            stdout=subprocess.PIPE)
    return _decode_str(proc.stdout.read())
Executable:
import sys
import codecs
import random

def _encode_str(s):
    encoded = s.encode("utf-32", "surrogatepass")
    return codecs.encode(encoded, "base64").replace(b"\n", b"")

def _decode_str(b64):
    return codecs.decode(b64, "base64").decode("utf-32", "surrogatepass")

command = sys.argv[1]
if command == "strlen":
    s = _decode_str(sys.argv[2].encode("ascii"))
    print(len(s))
if command == "randchr":
    s = _decode_str(sys.argv[2].encode("ascii"))
    print(_encode_str(random.choice(s)).decode("ascii"))
You might also want to think about compiling different executables for different platforms, if your package isn't a windows-only package anyways.
This is my idea; it might work. I don't know whether it will work or not.
1. Create your *.py files.
2. Rename them to *.pyx.
3. Convert them into *.c files using Cython.
4. Compile the *.c files into *.dll files.
But I don't recommend it, because it won't work on any platform except Windows.
Grab Visual Studio Express and IronPython and do it that way? You'll be in Python 2.7.6 world though.
