Using a USB flash drive with Raspberry Pi/Linux [duplicate] - python

I'm writing a python module for a device that interacts with a user-supplied USB memory stick. The user can insert a USB memory stick in the device's USB slot, and the device will dump data onto the memory stick without user intervention. If the device is running when the user inserts the USB stick, I have hooked into D-Bus and have an auto-mount routine all worked out. The new issue is: what if the stick is inserted while the device is powered off? After the device is powered on, I get no D-Bus insertion event, nor any of the associated nuggets of information about the memory stick.
I have worked out a way to derive the device node ( /dev/sd? ) from scanning the USB devices in /proc, by calling:
ls /proc/scsi/usb-storage
This gives the SCSI device info if you cat each of the files in that folder.
I then take the Vendor, Product, and Serial Number fields from the usb-storage records, generate an identifier string that I then use in
ll /dev/disk/by-id/usb-[vendor]_[product]_[serial_number]-0:0
So I can parse through the result to get the relative path
../../sdc
Then, I can mount the USB stick.
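As a rough illustration of that whole chain in Python (a sketch only; the 'Vendor', 'Product' and 'Serial Number' field names and the usb-<vendor>_<product>_<serial>-0:0 link format are assumptions based on the sticks I have to hand, and spaces in the fields may need extra massaging):
import os
import re

def usb_disk_nodes():
    """Sketch of the /proc/scsi/usb-storage -> /dev/disk/by-id lookup described above."""
    nodes = []
    usb_dir = '/proc/scsi/usb-storage'
    for entry in os.listdir(usb_dir):
        info = open(os.path.join(usb_dir, entry)).read()
        fields = dict(re.findall(r'^\s*(Vendor|Product|Serial Number):\s*(.*?)\s*$', info, re.M))
        link = '/dev/disk/by-id/usb-%s_%s_%s-0:0' % (
            fields.get('Vendor', '').replace(' ', '_'),
            fields.get('Product', '').replace(' ', '_'),
            fields.get('Serial Number', ''))
        if os.path.islink(link):
            nodes.append(os.path.realpath(link))  # e.g. /dev/sdc
    return nodes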
This is a cumbersome procedure, pretty much all text-based, and prone to bugs when someone introduces a weird character or a non-standard serial number string. It works with all 2 of the USB memory sticks I own. I have tried to map output from /var/log/messages, but that ends up being text comparisons as well. Output from lsusb, fdisk, udevinfo, lsmod, and others only shows half of the required data.
My question: how do I determine, in the absence of a D-Bus message, the /dev device assigned to a USB memory stick without user intervention, or knowing in advance the specifics of the inserted device?
Thanks, sorry about the novel.

This seems to work combining /proc/partitions and the /sys/class/block approach ephimient took.
#!/usr/bin/python
import os

partitionsFile = open("/proc/partitions")
lines = partitionsFile.readlines()[2:]  # Skips the header lines
for line in lines:
    words = [x.strip() for x in line.split()]
    minorNumber = int(words[1])
    deviceName = words[3]
    if minorNumber % 16 == 0:
        path = "/sys/class/block/" + deviceName
        if os.path.islink(path):
            if os.path.realpath(path).find("/usb") > 0:
                print "/dev/%s" % deviceName
I'm not sure how portable or reliable this is, but it works for my USB stick. Of course find("/usb") could be made into a more rigorous regular expression. Doing mod 16 may also not be the best approach to find the disk itself and filter out the partitions, but it works for me so far.
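One possible tightening along those lines, as an untested sketch: use the 'partition' attribute in sysfs to filter out partitions instead of the minor-number trick, and a regex for the USB check.
import os
import re

for deviceName in os.listdir("/sys/class/block"):
    path = "/sys/class/block/" + deviceName
    if os.path.exists(os.path.join(path, "partition")):
        continue  # entries with a 'partition' attribute are partitions, not whole disks
    if re.search(r"/usb[0-9]*/", os.path.realpath(path)):
        print "/dev/%s" % deviceName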

I'm not entirely certain how portable this is. Also, this information would presumably also be available over D-Bus from udisks or HAL but neither of those is present on my system so I can't try. It seems to be reasonably accurate here regardless:
$ for i in /sys/class/block/*; do
> /sbin/udevadm info -a -p $i | grep -qx ' SUBSYSTEMS=="usb"' &&
> echo ${i##*/}
> done
sde
sdf
sdg
sdh
sdi
sdj
sdj1
$ cd /sys/class/block/
$ for i in *; do [[ $(cd $i; pwd -P) = */usb*/* ]] && echo $i; done
sde
sdf
sdg
sdh
sdi
sdj
sdj1

After looking at this thread about doing what Ubuntu does with Nautilus, I found a few recommendations and decided to go with accessing udisks through shell commands.
The Mass_storage_device class is what you want. Just give it the device file, e.g. /dev/sdb.
You can then do d.mount() and d.mount_point to get where it has been mounted.
After that there is also a class for finding many identical USB devices, to control mounting, unmounting and ejecting a large list of devices that all share the same label.
(If you run it with no label argument, it will apply this to all sd devices. Could be handy for a "just auto-mount everything" script.)
import re
import subprocess

# used as a quick way to handle shell commands
def getFromShell_raw(command):
    p = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    return p.stdout.readlines()

def getFromShell(command):
    result = getFromShell_raw(command)
    for i in range(len(result)):
        result[i] = result[i].strip()  # strip out white space
    return result

class Mass_storage_device(object):
    def __init__(self, device_file):
        self.device_file = device_file
        self.mount_point = None

    def as_string(self):
        return "%s -> %s" % (self.device_file, self.mount_point)

    """ check if we are already mounted"""
    def is_mounted(self):
        result = getFromShell('mount | grep %s' % self.device_file)
        if result:
            dev, on, self.mount_point, null = result[0].split(' ', 3)
            return True
        return False

    """ If not mounted, attempt to mount """
    def mount(self):
        if not self.is_mounted():
            result = getFromShell('udisks --mount %s' % self.device_file)[0]  # print result
            if re.match('^Mounted', result):
                mounted, dev, at, self.mount_point = result.split(' ')
        return self.mount_point

    def unmount(self):
        if self.is_mounted():
            result = getFromShell('udisks --unmount %s' % self.device_file)  # print result
            self.mount_point = None

    def eject(self):
        if self.is_mounted():
            self.unmount()
        result = getFromShell('udisks --eject %s' % self.device_file)  # print result
        self.mount_point = None

class Mass_storage_management(object):
    def __init__(self, label=None):
        self.label = label
        self.devices = []
        self.devices_with_label(label=label)

    def refresh(self):
        self.devices_with_label(self.label)

    """ Uses udisks to retrieve a raw list of all the /dev/sd* devices """
    def get_sd_list(self):
        devices = []
        for d in getFromShell('udisks --enumerate-device-files'):
            if re.match('^/dev/sd.$', d):
                devices.append(Mass_storage_device(device_file=d))
        return devices

    """ takes a list of devices and uses udisks --show-info
        to find their labels, then returns a filtered list"""
    def devices_with_label(self, label=None):
        self.devices = []
        for d in self.get_sd_list():
            if label is None:
                self.devices.append(d)
            else:
                match_string = 'label:\s+%s' % (label)
                for info in getFromShell('udisks --show-info %s' % d.device_file):
                    if re.match(match_string, info): self.devices.append(d)
        return self

    def as_string(self):
        string = ""
        for d in self.devices:
            string += d.as_string() + '\n'
        return string

    def mount_all(self):
        for d in self.devices: d.mount()

    def unmount_all(self):
        for d in self.devices: d.unmount()

    def eject_all(self):
        for d in self.devices: d.eject()
        self.devices = []

if __name__ == '__main__':
    name = 'my devices'
    m = Mass_storage_management(name)
    print m.as_string()
    print "mounting"
    m.mount_all()
    print m.as_string()
    print "un mounting"
    m.unmount_all()
    print m.as_string()
    print "ejecting"
    m.eject_all()
    print m.as_string()
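For a single known device file, usage then looks roughly like this (matching the description above, with /dev/sdb as the example device):
d = Mass_storage_device('/dev/sdb')
print d.mount()        # mounts via udisks and returns the mount point
print d.mount_point
d.unmount()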

Why don't you simply use a udev rule? I had to deal with a similar situation, and my solution was to create a file in /etc/udev/rules.d containing the following rule:
SUBSYSTEMS=="scsi", KERNEL=="sd[b-h]1", RUN+="/bin/mount -o umask=000 /dev/%k /media/usbdrive"
One assumption here is that nobody ever inserts more than one USB stick at a time. It has, however, the advantage that I know in advance where the stick will be mounted (/media/usbdrive).
You can quite surely elaborate on it a bit to make it smarter, but personally I never had to change it, and it still works on several computers.
However, as I understand it, you want to be alerted somehow when a stick is inserted, and perhaps this strategy gives you some trouble on that side; I don't know, I didn't investigate...
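If you do want a notification as well, one option (a sketch, not part of the rule above; the script path is made up) is to add a second RUN to the same rule so udev also calls a tiny script after mounting, and let your application watch for its output:
SUBSYSTEMS=="scsi", KERNEL=="sd[b-h]1", RUN+="/bin/mount -o umask=000 /dev/%k /media/usbdrive", RUN+="/usr/local/bin/usb_inserted.py %k"
#!/usr/bin/python
# /usr/local/bin/usb_inserted.py (hypothetical path): udev passes the kernel
# name (e.g. sdb1); drop it somewhere your main program can poll or watch.
import sys
with open('/tmp/usb_inserted', 'w') as f:
    f.write('/dev/%s\n' % sys.argv[1])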

I think the easiest way is to use lsblk:
lsblk -d -o NAME,TRAN | grep usb
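If you want the same thing from Python, a small wrapper is enough (a sketch; it assumes the two-column NAME,TRAN output shown above and lsblk's -n/--noheadings flag):
import subprocess

def usb_disks():
    out = subprocess.check_output(['lsblk', '-d', '-n', '-o', 'NAME,TRAN'],
                                  universal_newlines=True)
    return ['/dev/' + line.split()[0]
            for line in out.splitlines()
            if line.split()[-1:] == ['usb']]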

Related

Edit wpa_supplicant.conf with python and raspbian

I want to add Wi-Fi connections from a Kivy app, and I'm using a simple function to edit the wpa_supplicant.conf file, adding new networks there.
My function correctly writes the configuration, and it seems identical to the configs added through the Raspbian GUI...
But when I reboot the Raspberry, it says no network interfaces were found; it gets solved if I delete the last added lines from the wpa_supplicant.conf file. For some reason Raspbian fails to read this file properly after the edit, but I don't see what I'm doing wrong here that differs from the default configurations.
Hope someone can provide me some hint... I run the script as sudo, so it can't be a permission issue. I tried to look for any differences between the way I write the config and the config provided by Raspbian, but no clue...
Here you can see the code:
def CreateWifiConfig(SSID, password):
    config = (
        '\nnetwork={{\n' +
        '\tssid="{}"\n' +
        '\tpsk="{}"\n' + '}}').format(SSID, password)
    print(config)
    with open("/etc/wpa_supplicant/wpa_supplicant.conf", "a+") as wifi:
        wifi.write(config)
        wifi.close()
    print("Wifi config added")
This is just a drive-by comment - I've not run your code, I'm just interpreting it based on what I'm reading - but a couple of things strike me here:
1) I don't see a key_mgmt value in your config. Usually this is something like WPA-PSK, but you can see some additional possible values here
2) Because you're using the "with" statement, the file should be automatically closed for you (you shouldn't need to call close() on it a second time).
3) The way you're constructing the config string looks a little strange to me; how about something like this (totally subjective, just a bit more readable to my eyes):
def CreateWifiConfig(SSID, password):
    config_lines = [
        '\n',
        'network={',
        '\tssid="{}"'.format(SSID),
        '\tpsk="{}"'.format(password),
        '\tkey_mgmt=WPA-PSK',
        '}'
    ]
    config = '\n'.join(config_lines)
    print(config)
    with open("/etc/wpa_supplicant/wpa_supplicant.conf", "a+") as wifi:
        wifi.write(config)
    print("Wifi config added")
4) The other thing to note here is that this code is not idempotent - you'll not be able to run this multiple times without compromising the integrity of the file content. You're probably aware of that already and may not especially care, but just pointing it out anyway.
Hopefully the missing key_mgmt is all it is.
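If you do care about re-running it, a simple (admittedly crude) guard is to check for the SSID before appending; a sketch, with a hypothetical helper name:
def network_already_configured(SSID):
    """Return True if an ssid= line for this network already exists."""
    with open("/etc/wpa_supplicant/wpa_supplicant.conf") as wifi:
        return 'ssid="{}"'.format(SSID) in wifi.read()
Call it at the top of CreateWifiConfig and return early when it comes back True.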
This is my solution to add or update Wi-Fi.
The json argument in the method is a dictionary with values like:
{"ssid": "wifi_name", "psk": "password"}. This script can easily be modified to add id_str or priority to the networks.
def command_add_wifi(json):
    print("-> Adding wifi")
    # Read the current wpa_supplicant file
    networks = []
    with open("/etc/wpa_supplicant/wpa_supplicant.conf", "r") as f:
        in_lines = f.readlines()
    # Discover networks
    out_lines = []
    networks = []
    i = 0
    isInside = False
    for line in in_lines:
        if "network={" == line.strip().replace(" ", ""):
            networks.append({})
            isInside = True
        elif "}" == line.strip().replace(" ", ""):
            i += 1
            isInside = False
        elif isInside:
            key_value = line.strip().split("=")
            networks[i][key_value[0]] = key_value[1]
        else:
            out_lines.append(line)
    # Update password or add new
    isFound = False
    for network in networks:
        if network["ssid"] == f"\"{json['ssid']}\"":
            network["psk"] = f"\"{json['psk']}\""
            isFound = True
            break
    if not isFound:
        networks.append({
            'ssid': f"\"{json['ssid']}\"",
            'psk': f"\"{json['psk']}\"",
            'key_mgmt': "WPA-PSK"
        })
    # Generate new WPA Supplicant
    for network in networks:
        out_lines.append("network={\n")
        for key, value in network.items():
            out_lines.append(f" {key}={value}\n")
        out_lines.append("}\n\n")
    # Write to WPA Supplicant
    with open('/etc/wpa_supplicant/wpa_supplicant.conf', 'w') as f:
        for line in out_lines:
            f.write(line)
    print("-> Wifi added !")
I tried rewriting the 'wpa_supplicant.conf' file using a Python script and the whole Wi-Fi system just broke down; I had to reformat and reinstall Raspbian OS on the SD card all over again. Creating a new 'wpa_supplicant.conf' file in the boot drive of the SD card doesn't work either. Luckily, I found a solution to connect to a new network using a Python script, but before that, you'll need to follow these steps first.
Solution: Instead of rewriting 'wpa_supplicant.conf' using with open('/etc/wpa_supplicant/wpa_supplicant.conf', 'w') as file:, rewrite the 'interfaces' file from the /etc/network/interfaces path.
Add this code below the source-directory /etc/network/interfaces.d line of the 'interfaces' file:
auto wlan0
iface wlan0 inet dhcp
wpa-ssid "WIFI NETWORK NAME HERE"
wpa-psk "WIFI NETWORK PASSWORD HERE"
Save the 'interfaces' file and open the 'wpa_supplicant.conf' file.
Add update_config=1 between the lines ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev and network={.... From my observation, this kinda ignores the contents of network={... and diverts its retrieval of Wi-Fi information to the /etc/network/interfaces file.
Save the file and reboot.
Once rebooted, you can rewrite the /etc/network/interfaces file by executing a Python script whenever you want to change to a new network (the script must trigger a system reboot at the end in order for the new Wi-Fi changes to take effect). (I don't know whether the 'interfaces' file allows multiple networks to be added; so far I have only tested with one network at a time, since this already fulfilled my project goal.) Here's a Python code sample to change the network information in the 'interfaces' file:
import os
import time

SSID = input('Enter SSID: ')
password = input("Enter password: ")

with open('/etc/network/interfaces', 'w') as file:
    content = \
        '# interfaces(5) file used by ifup(8) and ifdown(8)\n\n' + \
        '# Please note that this file is written to be used with dhcpcd\n' + \
        "# For static IP, consult /etc/dhcpcd.conf and 'man dhcpcd.conf'\n" + \
        '# Include files from /etc/network/interfaces.d:\n' + \
        'source-directory /etc/network/interfaces.d\n' + \
        'auto wlan0\n' + \
        'iface wlan0 inet dhcp\n' + \
        'wpa-ssid "' + SSID + '"\n' + \
        'wpa-psk "' + password + '"\n'
    file.write(content)

print("Write successful. Rebooting now.")
time.sleep(2)
os.system('sudo reboot now')

How to find an unassigned drive letter on windows with python

I needed to find a free drive letter on Windows from a Python script. Free here means not assigned to any physical or remote device.
I did some research and found a solution here on Stack Overflow (can't remember the exact link):
# for python 2.7
import string
import win32api

def getfreedriveletter():
    """ Find first free drive letter """
    assigneddrives = win32api.GetLogicalDriveStrings().split('\000')[:-1]
    assigneddrives = [item.rstrip(':\\').lower() for item in assigneddrives]
    for driveletter in list(string.ascii_lowercase[2:]):
        if not driveletter in assigneddrives:
            return driveletter.upper() + ':'
This works fine for all physical drives and connected network drives, but not for currently disconnected drives.
How can I get all used drive letters, including the temporarily disconnected ones?
Creating a child process is relatively expensive, and parsing free-form text output isn't the most reliable technique. You can instead use PyWin32 to call the same API functions that net use calls.
import string
import win32api
import win32wnet
import win32netcon

def get_free_drive():
    drives = set(string.ascii_uppercase[2:])
    for d in win32api.GetLogicalDriveStrings().split(':\\\x00'):
        drives.discard(d)
    # Discard persistent network drives, even if not connected.
    henum = win32wnet.WNetOpenEnum(win32netcon.RESOURCE_REMEMBERED,
                                   win32netcon.RESOURCETYPE_DISK, 0, None)
    while True:
        result = win32wnet.WNetEnumResource(henum)
        if not result:
            break
        for r in result:
            if len(r.lpLocalName) == 2 and r.lpLocalName[1] == ':':
                drives.discard(r.lpLocalName[0])
    if drives:
        return sorted(drives)[-1] + ':'
Note that this function returns the last available drive letter. It's a common practice to assign mapped and substitute drives (e.g. from net.exe and subst.exe) from the end of the list and local system drives from the beginning.
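For example (assuming Z: happens to be unassigned on the machine):
>>> get_free_drive()
'Z:'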
Since I will pass the found letter to an external script that runs the Windows shell command 'subst /d letter', I must not pass a currently unmounted network drive, as that would remove the network-drive mapping.
The only way I found was to parse the result of the shell command 'net use' to find unavailable drives.
Here is my solution; if you have a better way, please share it with me:
# for python 2.7
import string
import win32api
from subprocess import Popen, PIPE

def _getnetdrives():
    """ As getfreedriveletter can not find unconnected network drives,
        get these drives with the shell cmd 'net use' """
    callstr = 'net use'
    phandle = Popen(callstr, stdout=PIPE)
    presult = phandle.communicate()
    stdout = presult[0]
    # _stderr = presult[1]
    networkdriveletters = []
    for line in stdout.split('\n'):
        if ': ' in line:
            networkdriveletters.append(line.split()[1] + '\\')
    return networkdriveletters

def getfreedriveletter():
    """ Find first free drive letter """
    assigneddrives = win32api.GetLogicalDriveStrings().split('\000')[:-1]
    assigneddrives = assigneddrives + _getnetdrives()
    assigneddrives = [item.rstrip(':\\').lower() for item in assigneddrives]
    for driveletter in list(string.ascii_lowercase[2:]):  # starts from 'c' as I don't want the A and B drives
        if not driveletter in assigneddrives:
            return driveletter.upper() + ':'

What's the best way to implement an SCPI command tree structure as Python class methods?

SCPI commands are strings built of mnemonics that are sent to an instrument to modify/retrieve its settings and read measurements. I'd like to be able to build and send a string like this: "SENSe:VOLTage:DC:RANGe 10 V"
With code like this:
inst.sense.voltage.dc.range('10 V')
But it seems like a very deep rabbit hole and I'm not sure I want to start down it. Loads of classes for each subsystem and option....
Would a better approach be something like:
def sense_volt(self, currenttype='DC', RANG='10 V'):
    cmdStr = 'SENS:VOLT:' + currenttype + ':RANG ' + RANG
    return self.inst.write(cmdStr)
It's trivial to just implement the commands I need and leave inst.query(arg) and inst.write(arg) methods for manually building additional commands, but I'd like to eventually have a complete instrument interface where all commands are covered by autocompleteable methods.
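For what it's worth, the inst.sense.voltage.dc.range('10 V') style can be approximated without one class per subsystem by accumulating mnemonics in __getattr__ and only building the string when the chain is called. This is just an illustration of that idea, not the approach taken in the answer below:
class ScpiNode(object):
    """Builds 'SENSe:VOLTage:DC:RANGe 10 V' style strings from attribute access."""
    def __init__(self, write, path=()):
        self._write = write           # callable that sends the finished string
        self._path = path             # mnemonics collected so far
    def __getattr__(self, name):
        return ScpiNode(self._write, self._path + (name,))
    def __call__(self, *args):
        cmd = ':'.join(self._path)
        if args:
            cmd += ' ' + ', '.join(str(a) for a in args)
        return self._write(cmd)

inst = ScpiNode(print)                # swap print for your instrument's write()
inst.SENSe.VOLTage.DC.RANGe('10 V')   # prints: SENSe:VOLTage:DC:RANGe 10 V
You lose argument-aware autocompletion that way, of course, which is part of what the question is after.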
I managed it, though I'm pretty unhappy with the results. It's been three years since I touched any of this, so I'm not sure why I did some things the way I did. I know I had a lot of trouble wrapping my head around organizing everything for portability. I'm sure that now, after a long wait, I'll be inundated with suggestions for how I should have done it.
I wrote parsing functions to interpret SCPI commands. They can separate the command from the arguments, determine required and optional arguments, recognize queries, etc. For example:
def command_name(scpi_command):
    cmd = scpi_command.split(' ')[0]
    new = ''
    for i, c in enumerate(cmd):
        if c in '[]':
            continue
        if c.isupper() or c.isdigit():
            new += c.lower()
            continue
        if c == ':':
            new += '_'
            continue
        if c == '?':
            new += '_qry'
            break
    return new
The command from CONFigure[:VOLTage]:DC [{<range>|AUTO|MIN|MAX|DEF} [, {<resolution>|MIN|MAX|DEF}]] can be parsed as conf_volt_dc, or conf_dc. I did not use the abridged option in my experiments. I think some of my dissatisfaction stemmed from the extra-long method names.
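For example, from the parsing above:
>>> command_name('CONFigure[:VOLTage]:DC [{<range>|AUTO|MIN|MAX|DEF} [, {<resolution>|MIN|MAX|DEF}]]')
'conf_volt_dc'
>>> command_name('CALCulate:SCALe[:STATe]?')
'calc_scal_stat_qry'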
I wrote a builder script to read the commands from a file, parse them, and write a new script with a "command set" class extending a "command handler." Every command is parsed and converted into one of a few boilerplate methods, such as:
query_str_no_args = r'''
    def {name}(self):
        """SCPI instrument query.
        {s}
        """
        cmd = '{s}'
        return self._command_handler(command=cmd)
'''
The parser also includes a command handler class to process commands and arguments. Each command is parsed from its example text, à la _command_handler(0.1, 'MAX', command='CONFigure[:VOLTage]:DC [{<range>|AUTO|MIN|MAX|DEF} [, {<resolution>|MIN|MAX|DEF}]]'). Arguments (if any) are validated, and the command string is rebuilt and sent to the instrument:
class Cmd_Handler():
    def _command_handler(self, *args, command):
        # command string 'SENS:VOLT:RANG'
        cmd_str = command_string(command)
        # argument dictionary {0: [True, '<range>', 'AUTO', 'MIN', 'MAX', 'DEF'],
        #                      1: [False, '<resolution>', 'MIN', 'MAX', 'DEF']}
        arg_dict = command_args(command)
        if debug_mode: print(arg_dict)
        for k, v in arg_dict.items():
            for i, j in enumerate(v[1:]):
                ...
        # Validate arguments
        # count mandatory arguments
        if len(args) < len([arg_dict[k] for k in arg_dict.keys() if arg_dict[k][0]]):
            ...
        if '?' in cmd_str:
            return self._query(cmd_str + arg_str)
        else:
            return self._write(cmd_str + arg_str)
When the builder is done, you end up with a big script (the Ag34401 and Ag34461 files were 2600 and 3600 lines, respectively) that looks like this:
#!/usr/bin/env python3
from scpi.scpi_parse import Cmd_Handler

class Ag34461A_CS(Cmd_Handler):
    ...
    def calc_scal_stat(self, *args):
        """SCPI instrument command.
        CALCulate:SCALe[:STATe] {OFF|ON}
        """
        cmd = 'CALCulate:SCALe[:STATe] {OFF|ON}'
        return self._command_handler(*args, command=cmd)

    def calc_scal_stat_qry(self):
        """SCPI instrument query.
        CALCulate:SCALe[:STATe]?
        """
        cmd = 'CALCulate:SCALe[:STATe]?'
        return self._command_handler(command=cmd)
Then you extend this class with your short, sweet instrument class:
#!/usr/bin/env python3
import pyvisa as visa

from scpi.cmd_sets.ag34401a_cs import Ag34401A_CS  # Extend the command set

class Ag34401A(Ag34401A_CS):
    def __init__(self, resource_name=None):
        rm = visa.ResourceManager()
        self._inst = None
        if resource_name:
            self._inst = rm.open_resource(resource_name)
        else:
            for resource in rm.list_resources():
                inst = rm.open_resource(resource)
                if '34401' in inst.query("*IDN?"):
                    self._inst = inst
                    break
        if not self._inst:
            raise visa.errors.VisaIOError
        assert isinstance(self._inst, visa.resources.GPIBInstrument)
        self._query = self._inst.query
        self._write = self._inst.write
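Day-to-day use is then just method calls on that class; a sketch (it assumes the generated Ag34401A_CS contains a conf_volt_dc method, parsed from CONFigure[:VOLTage]:DC as described earlier):
dmm = Ag34401A()                # grabs the first VISA resource whose *IDN? contains '34401'
dmm.conf_volt_dc('10', 'MAX')   # builds and writes the CONFigure:VOLTage:DC command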
I also have a SCPI instrument parent class that I was trying to fill with all of the star commands (*IDN?, *CLS, etc.). I wasn't happy with the previous results, was intimidated by some of the star commands' mounting complexity (some of which seemed really instrument-specific), and abandoned it. Here's the first bit:
from abc import ABCMeta, abstractmethod

class SCPI_Instrument(object, metaclass=ABCMeta):
    @abstractmethod
    def query(self, message, delay):
        pass

    @abstractmethod
    def write(self, message, delay):
        pass

    # *CLS - Clear Status
    def cls(self, termination=None, encoding=None):
        """Clears the instrument status byte by emptying the error queue and clearing all event registers.
        Also cancels any preceding *OPC command or query."""
        return self.write("*CLS", termination, encoding)
In the end, the documentation I was able to squeeze into it wasn't as helpful as I wanted. The autocomplete wasn't able to prompt for arguments by name, and having every single command suggested was less helpful than I expected. Trying to keep the import paths straight was difficult enough for me, much less someone who might end up using my code.
Ultimately, coding has never been part of my job description, and I was moved to a position where experimenting with instrument control is even less possible.

Pipsta Printer and Printing a list

I'm trying to modify the simple python script provided with my Pipsta Printer so that instead of printing a single line of text, it prints out a list of things.
I know there are probably better ways to do this, but using the script below, could somebody please tell me what changes I need to make around the "txt = " part of the script, so that I can print out a list of items and not just one item?
Thanks.
# BasicPrint.py
# Copyright (c) 2014 Able Systems Limited. All rights reserved.
'''This simple code example is provided as-is, and is for demonstration
purposes only. Able Systems takes no responsibility for any system
implementations based on this code.

This very simple python script establishes USB communication with the Pipsta
printer and sends a simple text string to the printer.

Copyright (c) 2014 Able Systems Limited. All rights reserved.
'''
import argparse
import platform
import sys
import time

import usb.core
import usb.util

FEED_PAST_CUTTER = b'\n' * 5
USB_BUSY = 66

# NOTE: The following section establishes communication to the Pipsta printer
# via USB. YOU DO NOT NEED TO UNDERSTAND THIS SECTION TO PROGRESS WITH THE
# TUTORIALS! ALTERING THIS SECTION IN ANY WAY CAN CAUSE A FAILURE TO COMMUNICATE
# WITH THE PIPSTA. If you are interested in learning about what is happening
# herein, please look at the following references:
#
# PyUSB: http://sourceforge.net/apps/trac/pyusb/
# ...which is a wrapper for...
# LibUSB: http://www.libusb.org/
#
# For full help on PyUSB, at the IDLE prompt, type:
# >>> import usb
# >>> help(usb)
# 'Deeper' help can be trawled by (e.g.):
# >>> help(usb.core)
#
# or at the Linux prompt, type:
# pydoc usb
# pydoc usb.core
PIPSTA_USB_VENDOR_ID = 0x0483
PIPSTA_USB_PRODUCT_ID = 0xA053

def parse_arguments():
    '''Parse the arguments passed to the script looking for a font file name
    and a text string to print. If either is missing, defaults are used.
    '''
    txt = 'Hello World from Pipsta!'
    parser = argparse.ArgumentParser()
    parser.add_argument('text', help='the text to print',
                        nargs='*', default=txt.split())
    args = parser.parse_args()
    return ' '.join(args.text)

def main():
    """The main loop of the application. Wrapping the code in a function
    prevents it being executed when various tools import the code.
    """
    if platform.system() != 'Linux':
        sys.exit('This script has only been written for Linux')

    # Find the Pipsta's specific Vendor ID and Product ID
    dev = usb.core.find(idVendor=PIPSTA_USB_VENDOR_ID,
                        idProduct=PIPSTA_USB_PRODUCT_ID)
    if dev is None:  # if no such device is connected...
        raise IOError('Printer not found')  # ...report error

    try:
        # Linux requires USB devices to be reset before configuring; this may
        # not be required on other operating systems.
        dev.reset()
        # Initialisation. Passing no arguments sets the configuration to the
        # currently active configuration.
        dev.set_configuration()
    except usb.core.USBError as ex:
        raise IOError('Failed to configure the printer', ex)

    # The following steps get an 'Endpoint instance'. It uses
    # PyUSB's versatile find_descriptor functionality to claim
    # the interface and get a handle to the endpoint
    # An introduction to this (forming the basis of the code below)
    # can be found at:
    cfg = dev.get_active_configuration()  # Get a handle to the active interface
    interface_number = cfg[(0, 0)].bInterfaceNumber
    # added to silence Linux complaint about unclaimed interface, it should be
    # released automatically
    usb.util.claim_interface(dev, interface_number)
    alternate_setting = usb.control.get_interface(dev, interface_number)
    interface = usb.util.find_descriptor(
        cfg, bInterfaceNumber=interface_number,
        bAlternateSetting=alternate_setting)
    usb_endpoint = usb.util.find_descriptor(
        interface,
        custom_match=lambda e:
        usb.util.endpoint_direction(e.bEndpointAddress) ==
        usb.util.ENDPOINT_OUT
    )
    if usb_endpoint is None:  # check we have a real endpoint handle
        raise IOError("Could not find an endpoint to print to")

    # Now that the USB endpoint is open, we can start to send data to the
    # printer.
    txt = parse_arguments()
    usb_endpoint.write(b'\x1b!\x00')

    # Print a char at a time and check the printer's buffer isn't full
    for x in txt:
        usb_endpoint.write(x)  # write all the data to the USB OUT endpoint
        res = dev.ctrl_transfer(0xC0, 0x0E, 0x020E, 0, 2)
        while res[0] == USB_BUSY:
            time.sleep(0.01)
            res = dev.ctrl_transfer(0xC0, 0x0E, 0x020E, 0, 2)

    usb_endpoint.write(FEED_PAST_CUTTER)
    usb.util.dispose_resources(dev)

# Ensure that BasicPrint is run in a stand-alone fashion (as intended) and not
# imported as a module. Prevents accidental execution of code.
if __name__ == '__main__':
    main()
Pipsta uses the Linux line feed ('\n', 0x0A, 10 in decimal) to mark a new line. For a quick test, change BasicPrint.py as follows:
# txt = parse_arguments()
txt = 'Receipt:\n========\n1. food 1 - 100$\n2. drink 1 - 200$\n\n\n\n\n'
usb_endpoint.write(b'\x1b!\x00')
for x in txt:
    usb_endpoint.write(x)
    res = dev.ctrl_transfer(0xC0, 0x0E, 0x020E, 0, 2)
    while res[0] == USB_BUSY:
        time.sleep(0.01)
        res = dev.ctrl_transfer(0xC0, 0x0E, 0x020E, 0, 2)
I commented out the parameter parsing and injected a test string with multi-line content.
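To print an actual Python list of items, as asked, build the same kind of string by joining the list with '\n' before the character loop; a minimal sketch (items is just a stand-in for whatever list your script produces):
items = ['1. food 1 - 100$', '2. drink 1 - 200$']
txt = 'Receipt:\n========\n' + '\n'.join(items) + '\n'
The existing for x in txt: loop then sends it unchanged, and FEED_PAST_CUTTER still pushes the paper past the cutter at the end.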

How do I distribute my task defined in example.py to two client computers using mincemeat?

I've downloaded the mincemeat.py with example from https://github.com/michaelfairley/mincemeatpy/zipball/v0.1.2
example.py as follows:
#!/usr/bin/env python
import mincemeat

data = ["Humpty Dumpty sat on a wall",
        "Humpty Dumpty had a great fall",
        "All the King's horses and all the King's men",
        "Couldn't put Humpty together again",
        ]

datasource = dict(enumerate(data))

def mapfn(k, v):
    for w in v.split():
        yield w, 1

def reducefn(k, vs):
    result = sum(vs)
    return result

s = mincemeat.Server()
s.datasource = datasource
s.mapfn = mapfn
s.reducefn = reducefn

results = s.run_server(password="changeme")
print results
It is used for a word counting program.
I've connected two computers in network by LAN.
I have used one computer as a server and run example.py on it; on a second computer acting as a client, I've run mincemeat.py with the following command-line statement:
python mincemeat.py -p changeme server-IP
It works fine.
Now I have connected 3 computers in a LAN by a router. Then one machine works as a server and I want to run the example.py on it, and run the remaining two machines as client machines.
I want to distribute the task to my two client machines. So what is the process to distribute the task of the map and reduce to the two computers? How can I distribute my task, defined in example.py, to two client computers with their unique IPs respectively?
The default example contains hardly 50 words, so by the time you switch windows to start the second client, the first client has finished processing the text. Instead, run the same thing with a big text file and you can add a second client. The code below should work. I used the plain-text version of the novel Ulysses (~1.5 MB) from Project Gutenberg for this example.
On my machine (Intel Xeon @ 3.10 GHz), this took less than 30 seconds with 2 clients. So use a bigger file, or a list of files, or be quick to start the second client.
#!/usr/bin/env python
import mincemeat

def file_contents(file_name):
    f = open(file_name)
    try:
        return f.read()
    finally:
        f.close()

novel_name = 'Ulysses.txt'

# The data source can be any dictionary-like object
datasource = {novel_name: file_contents(novel_name)}

def mapfn(k, v):
    for w in v.split():
        yield w, 1

def reducefn(k, vs):
    result = sum(vs)
    return result

s = mincemeat.Server()
s.datasource = datasource
s.mapfn = mapfn
s.reducefn = reducefn

results = s.run_server(password="changeme")
print results
For a directory of files, use the following example. Dump all the text files in the folder textfiles.
#!/usr/bin/env python
import mincemeat
import glob

all_files = glob.glob('textfiles/*.txt')

def file_contents(file_name):
    f = open(file_name)
    try:
        return f.read()
    finally:
        f.close()

# The data source can be any dictionary-like object
datasource = dict((file_name, file_contents(file_name))
                  for file_name in all_files)

def mapfn(k, v):
    for w in v.split():
        yield w, 1

def reducefn(k, vs):
    result = sum(vs)
    return result

s = mincemeat.Server()
s.datasource = datasource
s.mapfn = mapfn
s.reducefn = reducefn

results = s.run_server(password="changeme")
print results
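As for getting both clients involved: example.py doesn't change at all. Start the server script on one machine, then run the same client command you already used (with the server machine's IP) on each of the two client machines:
python mincemeat.py -p changeme server-IP
mincemeat hands out the map tasks to whichever clients are connected, so with a large enough datasource both machines will pick up work.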
