Testing Embedded Systems with Python

I'm fairly new to Python and to testing. I'm unable to wrap my head around how embedded systems can be tested with Python.
1) I don't understand how Python is able to communicate with the low-level hardware of an embedded system.
2) How does Python communicate with C, so that Python can start simulating an environment (starting SPI communication) and receive information from the embedded system?
3) C is a low-level language that is close to the hardware, so it makes sense to me that we can control the peripherals on an embedded system. Python is a higher-level language and is abstracted from the hardware, so wouldn't we be unable to control the peripherals?
4) If we utilize a testing framework like Robot Framework, wouldn't we still have to set up some form of communication between the computer and the embedded system in Python (maybe using pySerial)?
Appreciate the help!

I think I know where you're coming from, as I have been in the same situation. Embedded systems are a great source of "whys" and you can go down the rabbit hole pretty quickly. I'll give you some brief answers with some links to feed your curiosity.
Q0: I don't understand how Python is able to communicate with the low level hardware of an embedded system.
A0: This depends on whether you are just communicating with the embedded system from an external OS which runs Python, or whether Python runs directly on the embedded system.
In the first case Python will open the communication port (be it serial, USB, Bluetooth, TCP, etc.) and start exchanging information with the system. Of course the endpoint must be running something to communicate back to you. The easiest example is an Arduino sending ADC readings over the serial port and your Python script reading them. Arduino <-> Python
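For that first case, here is a minimal sketch of the host side, assuming pySerial and an Arduino that prints its ADC readings one per line; the port name and baud rate are placeholders for your setup:
import serial                                            # pySerial

port = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)    # placeholder port and baud rate
try:
    for _ in range(10):
        line = port.readline().decode("ascii", errors="replace").strip()
        if line:
            print("ADC value:", line)
finally:
    port.close()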
In the second case an OS capable of running the Python interpreter runs directly on the embedded system (embedded Linux, for instance). Now Python can access all the resources of the system in order to control them; in particular it has access to the peripherals and to memory. A little more advanced - devmem with Python
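As a taste of the devmem idea, here is a hedged sketch of reading one 32-bit register through /dev/mem with Python's mmap. It needs root, and the register address is a placeholder you would take from your SoC's datasheet:
import mmap
import os
import struct

REG_BASE = 0x20200000          # placeholder: real peripheral base from your SoC datasheet
fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)
try:
    mem = mmap.mmap(fd, mmap.PAGESIZE, mmap.MAP_SHARED,
                    mmap.PROT_READ | mmap.PROT_WRITE, offset=REG_BASE)
    try:
        value, = struct.unpack_from("<I", mem, 0)     # read one 32-bit register
        print(f"register at {REG_BASE:#x} = {value:#010x}")
    finally:
        mem.close()
finally:
    os.close(fd)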
Q1: How does Python communicate with C, so Python can start simulating an environment (starting SPI comm.) and receive information from the embedded system?
A1: This question is very broad. Python can communicate with C in the sense that it can share memory with compiled C routines, which can then operate on the Python objects; see "Glue it all together".
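A tiny illustration of that glue with ctypes; libdriver.so and spi_transfer() are made-up names here, the point is only that Python loads the compiled C code and calls it directly:
import ctypes

lib = ctypes.CDLL("./libdriver.so")          # hypothetical compiled C library
lib.spi_transfer.argtypes = [ctypes.c_ubyte]
lib.spi_transfer.restype = ctypes.c_ubyte

response = lib.spi_transfer(0x9F)            # e.g. send a "read ID" command byte
print(f"device answered {response:#04x}")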
A second interpretation of your question is how you can run a simulation of an embedded system with Python. It is possible to simulate a system with dedicated libraries (from processor vendors), with QEMU, or at HDL level (the level of detail of what you can see varies greatly). Usually those libraries are written in C; the executable is loaded from Python and exposes functions to "talk" to the system being simulated, acting on things which would be difficult to do in the real world. You can switch a button on and off thousands of times, read and write registers on the fly, and so on.
Q2: C is a low-level language that is closer to the hardware, so it makes sense to me that we can control the peripherals on an embedded system. Python is a higher-level language and is abstracted from the hardware, so wouldn't we be unable to control the peripherals?
A2: The only situation where this question makes sense is scenario 2 of Q0, where Linux or some other OS is running. While it's true that Python is a higher-level language than C, most of the time when you talk to the peripherals you are using system calls that boil down to the very same functions that would be used from C. If I remember correctly, the serial.open() Python function, for instance, is just a wrapper around (something like) the fopen("/dev/tty") C function. Both languages will issue a system call and the OS will do the work of setting up the serial port driver.
Working with bare-metal C is a different thing: there you are responsible for every little detail while talking to buses and peripherals. If you have Python you have an OS by definition, hence it's better to let it deal with these details.
Q3: If we utilize a testing framework like Robot Framework, then wouldn't we still have to set up some form of communication between the computer and the embedded system in Python (maybe using pySerial)?
A3: I had a brief look at Robot Framework and it seems that all the communication is done through HTTP. You might want to digest answers 0, 1 and 2 first, then get back here and ask something more specific. Also, Robot Framework seems related to web testing and has nothing to do with embedded systems.

Related

Raspberry Pi protection against reverse engineering the code

Problem
I want to secure my Raspberry Pi in a special way.
I would like to start up the Raspberry Pi without entering a password, as the pi user.
However, I want the pi user to have zero privileges: can't read a file, can't copy a file, simply nothing, and no access to root via 'sudo su'. On the other hand, when the Raspberry Pi starts up as the pi user, I want the backend process to be running as root. So to put it simply, I want it like a zoo - two worlds, but neither of them can enter the other. Clients can be present, see what processes are running, see files in directories, but can't read, copy or remove them, etc. At the same time I want the backend to be untouched, running and writing files.
Reason:
I have a Raspberry Pi product - the customer takes it home, and when they plug in the power supply, the RPi starts up, runs backend programs with root privileges and communicates with my desktop software.
But I don't want a curious customer to plug in HDMI and see my code. I also don't want him to take the SD card and extract the code.
I heard it's possible to reverse engineer the code even if it's compiled. So I simply want the programs (Python scripts) to be there but not be accessible in any way.
Is it possible to make such protection?
Thanks in advance
You may consider the following approach:
Use at least two levels of hashing with the MAC address and the ARM chip serial number (via cat /proc/cpuinfo), together with additional secret keys. Run your program only if the stored license key matches the result of the doubly-hashed function (a rough sketch follows below).
Optionally, you could rewrite critical parts of your code in C, compile them statically, and strip all debug symbols. Call them from Python.
As a quick optimization, compile your code with Cython and call the generated shared objects from a Python caller script. Distribute only the shared objects and the caller script.
This will prevent most people from reverse engineering your code.
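A rough sketch of the license check in the first point; the salts, the license file path and the exact /proc/cpuinfo field name are placeholders you'd adapt to your board:
import hashlib
import uuid

def read_cpu_serial():
    # On a Raspberry Pi /proc/cpuinfo has a "Serial" line (check your board).
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("Serial"):
                return line.split(":")[1].strip()
    return ""

def expected_key():
    mac = format(uuid.getnode(), "012x")                  # board MAC address
    serial = read_cpu_serial()                            # ARM chip serial number
    inner = hashlib.sha256((mac + serial + "SECRET1").encode()).hexdigest()
    return hashlib.sha256((inner + "SECRET2").encode()).hexdigest()

def license_is_valid(path="/etc/myapp/license.key"):      # hypothetical key location
    try:
        with open(path) as f:
            return f.read().strip() == expected_key()
    except OSError:
        return False

if license_is_valid():
    print("license OK, starting backend")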
Just to be clear, "can't read a file" implies "can't run a program", which implies "can't see what processes are running or see files in directories".
From your question, I don't understand why you'd even leave the pi user in place...
... runs backend programs with root privilege
Never a good idea - use service accounts instead.
But I dont want curious customer that plugs in HDMI and see my code.
Then don't enable the HDMI output, don't have a graphical desktop installed and disable the login prompt. You might want to look at a "minimal" / "lite" image.
Remember that the UART can present a login prompt, so make sure that's disabled too.
And as config.txt and the kernel need to be in cleartext in the boot partition, they can easily be swapped... thus these steps are not going to be terribly effective.
I also dont want him to take the SD card and extract the code.
You could look at encrypting the filesystems (e.g: LUKS), but the Raspberry Pi has no native ability to store data and identify itself... so your encryption key can only be something like the MAC address, or stored in cleartext on the SD card...
Fundamentally this is just going to be a deterrent from casual "oh what's this" investigations.
"Physical access is total access"... once you put it in the customer's hands, you're looking at deterrents more than absolutes.
I heard its possible to reverse engineer the code even if compiled. So I simply want the programs (python script) to be there but cannot be accessed in any way.
Python doesn't get compiled until runtime, so you'll need to ship the device with your source code on it...
If you really want to secure your Intellectual Property, then perhaps the Raspberry Pi isn't the best option? It's up to you to balance cost vs security.

dump1090 offline raspberry pi

I'm currently doing a project in which I'm making an ADS-B flight radar on an LED matrix, which is controlled by a Raspberry Pi. I've found a program called dump1090 which receives and decodes the data from my SDR receiver. I can find lots of examples on how to forward that data to a webserver or whatever, but I can't seem to find anything on how you can programmatically listen to the data dump1090 produces. Does anyone know how you can programmatically receive dump1090's data in order to use it in a program? (any language would do, but perhaps Python would be the most obvious choice)
You should be able to start dump1090 using a programming language of your choice (C/C++/Java/Python/etc.) and read its stdout pipe.
Personally, on the Raspberry Pi, I find Python nicer to use since it's easier to test and iterate without needing to compile. Python provides the subprocess package, which allows you to run dump1090 (or any other application) from within Python and have a look at its output (using subprocess.check_output('dump1090'), for example). Have a look at the check_output and Popen options to see what works best with your application.
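A small sketch of the Popen variant, reading dump1090's output line by line; the --raw flag is an assumption, so check dump1090 --help for the output mode you actually want:
import subprocess

proc = subprocess.Popen(["dump1090", "--raw"],
                        stdout=subprocess.PIPE, text=True)
try:
    for line in proc.stdout:
        message = line.strip()
        if message:
            print("received:", message)   # feed this into your LED matrix code
finally:
    proc.terminate()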

How to port/embed python for arm based devices?

TL;DR: how hard is it to port Python to a new OS?
I want to use Python to write applications for Verifone's VX 680. They are 32-bit ARM-based devices with 128+ MB of RAM. http://www.verifone.com/media/4300697/vx680_ds_ltr.pdf
My idea is to write a C application which calls the Python interpreter. My application will be a bunch of Python modules. The app needs to show a graphics-rich UI, send HTTPS messages and access peripherals (e.g. WiFi radio, PinPad, thermal printer). Despite my investigation, I'm still completely lost.
What is the list of things I need to address in order to be able to write python applications in this device?
I have personally ported CPython to my own operating system; the real problem was the lack of cross-compiling support - I found patches for 2.5.1 to make it cross-compile cleanly.
After it was compiling cleanly, I just needed to provide a fairly minimal set of system calls to make it work. For anything serious, at least a read-only filesystem is a must. In any case, if your libc is POSIXish, you shouldn't have too many problems getting started.
The set of system calls I had in the beginning was exit, open, close, read (for the console and files), write (to file descriptors 1 and 2 only), stat, fstat and sbrk (for changing the heap size). I used the newlib C library with libgloss - everything that wasn't mapped to these just returned error values or defaults.

How can I access Ring 0 with Python?

This answer, stating that the naming of classes in Python is not done because of special privileges, confuses me.
How can I access lower rings in Python?
Is the low-level io for accessing lower level rings?
If it is, which rings I can access with that?
Is the statement "This function is intended for low-level I/O." referring to lower level rings or to something else?
C tends to be the prominent language in OS programming. Since there is the OS class in Python, does that mean I can access C code through that class?
Suppose I am playing with bizarre machine-language code and I want to somehow understand what it means. Are there some tools in Python which I can use to analyze such things? If there are not, is there some way I could still use Python to control some tool which controls the bizarre machine language? [ctypes suggested in comments]
If Python has nothing to do with the low-level privileged stuff, does it still offer some wrappers to control the privileged parts?
Windows and Linux both use ring 0 for kernel code and ring 3 for user processes. The advantage of this is that user processes can be isolated from one another, so the system continues to run even if a process crashes. By contrast, a bug in ring 0 code can potentially crash the entire machine.
One of the reasons ring 0 code is so critical is that it can access hardware directly. By contrast, when a user-mode (ring 3) process needs to read some data from a disk:
1. the process executes a special instruction telling the CPU it wants to make a system call
2. the CPU switches to ring 0 and starts executing kernel code
3. the kernel checks that the process is allowed to perform the operation
4. if permitted, the operation is carried out
5. the kernel tells the CPU it has finished
6. the CPU switches back to ring 3 and returns control to the process
Processes belonging to "privileged" users (e.g. root/Administrator) run in ring 3 just like any other user-mode code; the only difference is that the check at step 3 always succeeds. This is a good thing because:
root-owned processes can crash without taking the entire system down
many user-mode features are unavailable in the kernel, e.g. swappable memory, private address space
As for running Python code in lower rings - kernel-mode is a very different environment, and the Python interpreter simply isn't designed to run in it, e.g. the procedure for allocating memory is completely different.
In the other question you reference, both os.open() and open() end up making the open() system call, which checks whether the process is allowed to open the corresponding file and performs the actual operation.
I think SimonJ's answer is very good, but I'm going to post my own because from your comments it appears you're not quite understanding things.
Firstly, when you boot an operating system, what you're doing is loading the kernel into memory and saying "start executing at address X". The kernel, that code, is essentially just a program, but of course nothing else is loaded, so if it wants to do anything it has to know the exact commands for the specific hardware it has attached to it.
You don't have to run a kernel. If you know how to control all the attached hardware, you don't need one, in fact. However, it was rapidly realised way back when that there are many types of hardware one might face and having an identical interface across systems to program against would make code portable and generally help get things done faster.
So the function of the kernel, then, is to control all the hardware attached to the system and present it through a common interface, called an API (application programming interface). Programs that run on the system don't talk directly to hardware; they talk to the kernel. So userland programs don't need to know how to ask a specific hard disk to read sector 0x213E or whatever, but the kernel does.
Now, the description of ring 3 provided in SimonJ's answer is how userland is implemented - with isolated, unprivileged processes with virtual private address spaces that cannot interfere with each other, for the benefits he describes.
There's also another level of complexity in here, namely the concept of permissions. Most operating systems have some form of access control, whereby "administrators" have total control of the system and "users" have a restricted subset of options. So a kernel request made by an ordinary user to open a file belonging to an administrator should fail under this sort of approach. The user who runs the program forms part of the program's context, if you like, and what the program can do is constrained by what that user can do.
Most of what you could ever want to achieve (unless your intention is to write a kernel) can be done in userland as the root/administrator user, where the kernel does not deny any API requests made to it. It's still a userland program. It's still a ring 3 program. But for most (nearly all) uses it is sufficient. A lot can be achieved as a non-root/administrative user.
That applies to the python interpreter and by extension all python code running on that interpreter.
Let's deal with some uncertainties:
The naming of os and sys, I think, is because these are "system" tasks (as opposed to, say, urllib2). They give you ways to manipulate and open files, for example. However, these go through the Python interpreter, which in turn makes calls to the kernel.
I do not know of any kernel-mode python implementations. Therefore to my knowledge there is no way to write code in python that will run in the kernel (linux/windows).
There are two senses of "privileged": privileged in terms of hardware access, and privileged in terms of the access control system provided by the kernel. Python can be run as root/an administrator (indeed, on Linux many of the administration GUI tools are written in Python), so in that sense it can access privileged code.
Writing a C extension for Python, or controlling a C application from Python, would ostensibly mean you are either using code added to the interpreter (userland) or controlling another userland application. However, if you wrote a kernel module in C (Linux) or a driver in C (Windows), it would be possible to load it and interact with it via the kernel APIs from Python. An example might be creating a /proc entry in C and then having your Python application pass messages via reads and writes on that /proc entry (which the kernel module would have to handle via write/read handlers). Essentially, you write the code you want to run in kernel space and add to or extend the kernel API in one of many ways so that your program can interact with that code.
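To make the /proc example concrete, here is a hedged sketch of the userland half only; /proc/mydriver and the message format are hypothetical, and the C side of the kernel module is not shown:
PROC_ENTRY = "/proc/mydriver"     # hypothetical entry created by your kernel module

def send_command(cmd):
    with open(PROC_ENTRY, "w") as f:
        f.write(cmd)              # handled by the module's write handler
    with open(PROC_ENTRY) as f:
        return f.read().strip()   # handled by the module's read handler

print(send_command("status"))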
"Low-level" IO means having more control over the type of IO that takes place and how you get that data from the operating system. It is low level compared to higher level functions still in Python that give you easier ways to read files (convenience at the cost of control). It is comparable to the difference between read() calls and fread() or fscanf() in C.
Health warning: Writing kernel modules, if you get it wrong, will at best result in that module not being properly loaded; at worst your system will panic/bluescreen and you'll have to reboot.
The final point about machine instructions I cannot answer here. It's a totally separate question and it depends. There are many tools capable of analysing code like that, I'm sure, but I'm not a reverse engineer. However, I do know that many of these tools (e.g. gdb, valgrind - tools that hook into binary code) do not need kernel modules to do their work.
You can use the inpout library: http://logix4u.net/parallel-port/index.php
import ctypes

# Example of strobing data out with the nStrobe pin (note - inverted).
# Gets ~50 kbaud without the read, ~30 kbaud with it.
read = []
for n in range(4):
    ctypes.windll.inpout32.Out32(0x37a, 1)            # assert strobe (inverted)
    ctypes.windll.inpout32.Out32(0x378, n)            # put the data byte on the data port
    read.append(ctypes.windll.inpout32.Inp32(0x378))  # dummy read to see what is going on
    ctypes.windll.inpout32.Out32(0x37a, 0)            # release strobe
print(read)
[note: I was wrong. usermode code can no longer access ring 0 on modern unix systems. -- jc 2019-01-17]
I've forgotten what little I ever knew about Windows privileges. In all Unix systems with which I'm familiar, the root user can access all ring0 privileges. But I can't think of any mapping of Python modules with privilege rings.
That is, the 'os' and 'sys' modules don't give you any special privileges. You have them, or not, due to your login credentials.
How can I access lower rings in Python?
ctypes
Is the low-level io for accessing lower level rings?
No.
Is the statement "This function is intended for low-level I/O." referring to lower level rings or to something else?
Something else.
C tends to be the prominent language in OS programming. Since there is the OS class in Python, does that mean I can access C code through that class?
All of CPython is implemented in C.
The os module (it's not a class, it's a module) is for accessing OS API's. C has nothing to do with access to OS API's. Python accesses the API's "directly".
Suppose I am playing with bizarre machine-language code and I want to somehow understand what it means. Are there some tools in Python which I can use to analyze such things?
"playing with"?
"understand what it means"? is your problem. You read the code, you understand it. Whether or not Python can help is impossible to say. What don't you understand?
If there is not, is there some way that I could still use Python to control some tool which controls the bizarre machine language? [ctypes suggested in comments]
ctypes
If Python has nothing to do with the low-level privileged stuff, does it still offer some wrappers to control the privileged parts?
You don't "wrap" things to control privileges.
Most OS's work like this.
You grant privileges to a user account.
The OS API's check the privileges granted to the user making the OS API request.
If the user has the privileges, the OS API works.
If the user lacks the privileges, the OS API raises an exception.
That's all there is to it.
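A small illustration of that last step, assuming a typical Linux system where /etc/shadow is readable only by root:
try:
    with open("/etc/shadow") as f:    # readable only by root on a typical Linux system
        f.read()
except PermissionError as err:
    print("the kernel refused the request:", err)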

python IPC (Inter Process Communication) for Vista UAC (User Access Control)

I am writing a file manager in (wx)Python - a lot of it already works. When copying files there is already a progress dialog, overwrite handling, etc.
Now in Vista, when the user wants to copy a file to certain directories (e.g. %ProgramFiles%), the application/script needs elevation, which cannot be requested at runtime. So I have to start another app/script elevated, which does the work but needs to communicate with the main app, so the latter can update the progress etc.
I searched and found a lot of articles saying shared memory and pipes are the easiest way. So what I am looking for is a 'high-level', platform-independent IPC library with Python bindings that uses shared memory or pipes.
I already found omniORB, Fnorb, etc. They look very interesting, but they use TCP/IP; is there an equivalent library using shared memory or pipes? Since the IPC client is always on the same machine, sockets don't seem necessary here. And I am also afraid the user would have to allow IPC socket communication in his/her personal firewall.
EDIT: I really do mean high level: it would be great to be able to just call some functions, like when using omniORB, instead of sending strings to stdin/stdout.
How about just communicating with the second process using stdin/stdout?
There are some caveats due to input and output buffering, but take a look at this Python Cookbook recipe, and also Pexpect, for ideas on how to do this.
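A hedged sketch of that pattern with subprocess: copy_helper.py and the one-line message format are hypothetical, the elevation step itself is not shown, and the explicit flush is there because of the buffering caveats mentioned above:
import subprocess
import sys

helper = subprocess.Popen([sys.executable, "copy_helper.py"],
                          stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

helper.stdin.write("COPY src.txt dest_dir\n")   # made-up one-line protocol
helper.stdin.flush()                            # flush to avoid buffering stalls

for line in helper.stdout:                      # helper reports e.g. "PROGRESS 42" / "DONE"
    print("from helper:", line.strip())
    if line.startswith("DONE"):
        break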
