I'm trying to send a variable in an SMS using gammu. I'm using the gammu-smsd RunOnReceive directive to run a Python script when I send a message to my Raspberry Pi from my phone. This is what the script looks like:
#!/usr/bin/python
import os
os.system("sh .webgps.sh > coordinates.text")
file = "/home/pi/coordinates.text"
with open(file) as f:
    (lat, long) = f.read().splitlines()
os.system("echo lat | sudo gammu-smsd-inject TEXT 07xxxxxxxxx")
What this script does is run a shell script which gets the latitude and longitude from my GPS module and puts them in a text file. Then it reads the values from the text file and puts the latitude in the lat variable and the longitude in the long variable. I can verify that this works, because when I print the variables I can see the latitude and longitude, and they are the same values as the ones in the text file.
Now the bit that I'm having problems with is sending the values to my phone. If I run the Python script as it currently is, I get a message on my phone which just says "lat". What I want is to be sent the actual values for latitude and longitude, and I don't know how to put the variables into the gammu-smsd-inject line.
You receive "lat" on your phone because the Python variable lat is not expanded inside the os.system echo call; the shell sees only the literal word lat.
Passing a Python variable to the shell is a slightly awkward business.
One solution that has worked for me in a similar situation is this:
with open(file) as f:
    (lat, long) = f.read().splitlines()
cmd = "echo " + lat + " | sudo gammu-smsd-inject TEXT 07xxxxxxxxx"
os.system(cmd)
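If you would rather not build shell strings by hand at all, here is a minimal sketch using subprocess instead of os.system (the number is a placeholder, as in the question):

import subprocess

# Pass the message on stdin instead of interpolating it into a shell
# command, so the coordinate values are treated purely as data.
message = "lat: %s, long: %s" % (lat, long)
p = subprocess.Popen(["sudo", "gammu-smsd-inject", "TEXT", "07xxxxxxxxx"],
                     stdin=subprocess.PIPE)
p.communicate(message.encode())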
Better, use gammu's own Python bindings: python-gammu gives you straightforward access to the phone and better error handling. Many examples are available in the examples/ directory in the python-gammu sources.
On Ubuntu it is advised to use the distribution's repository, so python-gammu should be installed via the apt package manager:
apt-get install python-gammu
Here is one of the example scripts, "Sending a message":
#!/usr/bin/env python
# Sample script to show how to send SMS
from __future__ import print_function
import gammu
import sys
# Create object for talking with phone
state_machine = gammu.StateMachine()
# Optionally load config file as defined by first parameter
if len(sys.argv) > 2:
    # Read the configuration from given file
    state_machine.ReadConfig(Filename=sys.argv[1])
    # Remove file name from args list
    del sys.argv[1]
else:
    # Read the configuration (~/.gammurc)
    state_machine.ReadConfig()
# Check parameters
if len(sys.argv) != 2:
    print('Usage: sendsms.py [configfile] RECIPIENT_NUMBER')
    sys.exit(1)
# Connect to the phone
state_machine.Init()
# Prepare message data
# We tell that we want to use first SMSC number stored in phone
message = {
    'Text': 'python-gammu testing message',
    'SMSC': {'Location': 1},
    'Number': sys.argv[1],
}
# Actually send the message
state_machine.SendSMS(message)
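Applied to the question above, a rough sketch that reads the same coordinates file and sends both values in one message (the number is a placeholder):

import gammu

# Connect using ~/.gammurc, as in the sample above
state_machine = gammu.StateMachine()
state_machine.ReadConfig()
state_machine.Init()

# Read the coordinates produced by the shell script
with open('/home/pi/coordinates.text') as f:
    (lat, long) = f.read().splitlines()

message = {
    'Text': 'lat: %s, long: %s' % (lat, long),
    'SMSC': {'Location': 1},
    'Number': '07xxxxxxxxx',
}
state_machine.SendSMS(message)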
Related
How do I iterate through the mount points of a Linux system using Python? I know I can do it using the df command, but is there a built-in Python function to do this?
Also, I'm just writing a Python script to monitor the mount points usage and send email notifications. Would it be better / faster to do this as a normal shell script as compared to a Python script?
Thanks.
The Python and cross-platform way:
pip install psutil # or add it to your setup.py's install_requires
And then:
import psutil
partitions = psutil.disk_partitions()
for p in partitions:
    print p.mountpoint, psutil.disk_usage(p.mountpoint).percent
Running the mount command from within Python is not the most efficient way to solve the problem. You can apply Khalid's answer and implement it in pure Python:
with open('/proc/mounts','r') as f:
    mounts = [line.split()[1] for line in f.readlines()]
import smtplib
import email.mime.text
msg = email.mime.text.MIMEText('\n'.join(mounts))
msg['Subject'] = <subject>
msg['From'] = <sender>
msg['To'] = <recipient>
s = smtplib.SMTP('localhost')  # replace 'localhost' with the mail exchange host if necessary
s.sendmail(<sender>, <recipient>, msg.as_string())
s.quit()
where <subject>, <sender> and <recipient> should be replaced by appropriate strings.
The bash way to do it, just for fun:
awk '{print $2}' /proc/mounts | xargs df -h | mail -s `date +%Y-%m-%d` you@me.com
I don't know of any library that does it but you could simply launch mount and return all the mount points in a list with something like:
import commands
mount = commands.getoutput('mount -v')
mntlines = mount.split('\n')
mntpoints = map(lambda line: line.split()[2], mntlines)
The code retrieves all the text from the mount -v command, splits the output into a list of lines and then parses each line for the third field which represents the mount point path.
If you wanted to use df then you can do that too but you need to remove the first line which contains the column names:
import commands
mount = commands.getoutput('df')
mntlines = mount.split('\n')[1::] # [1::] trims the first line (column names)
mntpoints = map(lambda line: line.split()[5], mntlines)
Once you have the mount points (the mntpoints list), you can use a for loop to process each one with code like this:
for mount in mntpoints:
    # Process each mount here. As an example we just print each one.
    print(mount)
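Note that the commands module is Python 2 only (it was removed in Python 3); here is a rough equivalent of the mount -v version using subprocess:

import subprocess

# Same idea as above, but with subprocess, which exists in both 2.7 and 3.x
mount = subprocess.check_output(['mount', '-v']).decode()
mntlines = mount.strip().split('\n')
mntpoints = [line.split()[2] for line in mntlines]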
Python has a mail-handling module called smtplib; more information can be found in the Python docs.
Background
I'm working on a bash script to pull serial numbers and part numbers from all the devices in a server rack; my goal is to be able to run a single script (inventory.sh) and walk away while it generates text files containing the information I need. I'm using bash for maximum compatibility; the RHEL 6.7 systems do have Perl and Python installed, but only with minimal libraries. So far I haven't had to use anything other than bash, but I'm not against calling a Perl or Python script from my bash script.
My Problem
I need to retrieve the Serial Numbers and Part numbers from the drives in a Dot Hill Systems AssuredSAN 3824, as well as the Serial numbers from the equipment inside. The only way I have found to get all the information I need is to connect over SSH and run the following three commands dumping the output to a local file:
show controllers
show frus
show disks
Limitations:
I don't have "sshpass" installed, and would prefer not to install it.
The Controller is not capable of storing SSH keys (no option in its custom shell).
The Controller also cannot write or transfer local files.
The Rack does NOT have access to the Internet.
I looked at paramiko, but while Python is installed I do not have pip.
I also cannot use CPAN.
For what it's worth, the output comes back in XML format. (I've already written the code to parse it in bash.)
Right now I think my best option would be to keep a library for Python or Perl in the folder with my other scripts, and write a script that dumps the commands' output to files my bash script can parse. Which language makes it easier to just drop in a library as a file? I'm looking for a library that is as small and simple to use as possible; I just need a way to get the output of those commands into XML files. Right now I am running ssh three times in my script and having to enter the password each time.
Have a look at SNMP. There is a reasonable chance that you can use SNMP tools to remotely extract the information you need. The manufacturer should be able to provide you with the MIBs.
I ended up contacting the manufacturer and asking my question. They said that the system isn't set up for connecting without a password, and their SNMP is very basic and won't provide the information I need. They said to connect to the system with FTP and use "get logs " to download an archive of the configuration and logs. Not exactly ideal, as it takes 4 minutes just to run that one command, but it seems to be my only option. Below is the script I wrote to retrieve the file automatically by adding the login credentials to the .netrc file. This works on RHEL 6.7:
#!/bin/bash
#Retrieve the logs and configuration from a Dot Hill Systems AssuredSAN 3824 automatically.
#Modify "LINE" and "HOST" to fit your configuration.
LINE='machine <IP> login manage password <password>'
HOST='<IP>'
AUTOLOGIN="/root/.netrc"
FILE='logfiles.zip'
#Check for and verify the autologin file
if [ -f "$AUTOLOGIN" ]; then
    printf "Found auto-login file, checking for proper entry... \r"
    READLINE=`grep "$LINE" "$AUTOLOGIN"`
    #Append the line to the end of .netrc if the file exists but not the line.
    if [ "$LINE" != "$READLINE" ]; then
        printf "Proper entry not found, creating it... \r"
        echo "$LINE" >> "$AUTOLOGIN"
    else
        printf "Proper entry found... \r"
    fi
#Create the auto-login file if it doesn't exist
else
    printf "Auto-Login file does not exist, creating it and setting permissions...\r"
    echo "$LINE" > "$AUTOLOGIN"
    chmod 600 "$AUTOLOGIN"
fi
#Start getting the information from the controller. (This takes a VERY long time)
printf "Retrieving Storage Controller data, this will take awhile... \r"
ftp $HOST << SCRIPT
get logs $FILE
SCRIPT
exit 0
This gave me a bunch of files in the zip, but all I needed was the "store_....logs" file. It was about 500,000 lines long; the first portion is the entire configuration in XML format, then the configuration in text format, followed by the logs from the system. I parsed the file and stripped off the logs at the end, which cut the file down to 15,000 lines. From there I divided it into two files (config.xml and config.txt). I then pulled the XML output of the 3 commands that I needed and wrote it to the 3 files my previously written script searches for. Now my inventory script pulls in everything it needs, albeit pretty slowly due to waiting 4 minutes for the system to generate the zip file. I hope this helps someone in the future.
Edit:
Waiting 4 minutes for the system to compile the archive was taking too long, so I ended up using paramiko and Python scripts to dump the output of the commands to files that my other code can parse. Each script accepts the IP of the controller as a parameter. Here is the script, for those interested. Thank you again for all the help.
#!/usr/bin/env python
#Saves output of "show disks" from the storage Controller to an XML file.
import paramiko
import sys
import re
import xmltodict
IP = sys.argv[1]
USERNAME = "manage"
PASSWORD = "password"
FILENAME = "./logfiles/disks.xml"
cmd = "show disks"
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
    client.connect(IP, username=USERNAME, password=PASSWORD)
    stdin, stdout, stderr = client.exec_command(cmd)
except Exception as e:
    sys.exit(1)
data = ""
for line in stdout:
    if re.search('#', line):
        pass
    else:
        data += line
client.close()
f = open(FILENAME, 'w+')
f.write(data)
f.close()
sys.exit(0)
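Since the three scripts differ only in the command string and the output file, here is a hypothetical generalization of the same approach that fetches all three in one SSH connection (the file names are my own):

#!/usr/bin/env python
#Save the XML output of all three "show" commands in one connection.
import sys
import paramiko

IP = sys.argv[1]
USERNAME = "manage"
PASSWORD = "password"
COMMANDS = {
    "show controllers": "./logfiles/controllers.xml",
    "show frus": "./logfiles/frus.xml",
    "show disks": "./logfiles/disks.xml",
}

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(IP, username=USERNAME, password=PASSWORD)
for cmd, filename in COMMANDS.items():
    stdin, stdout, stderr = client.exec_command(cmd)
    # Skip the prompt lines containing '#', keep everything else
    data = "".join(line for line in stdout if '#' not in line)
    with open(filename, 'w') as f:
        f.write(data)
client.close()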
I have the following program, where I'm redirecting output on the remote terminal to a text file:
import telnetlib
import time
import string
tn = telnetlib.Telnet("192.168.1.102")
print "Attempting Telnet connection...."
tn.read_until("login: ")
tn.write("root1\n")
time.sleep(1)
tn.read_until("Password:")
tn.write("\n")
tn.write("\n")
tn.read_until("root1#mypc:~$")
tn.write("su\n")
tn.read_until("root#mypc:/home/root1")
tn.write("\n")
print "Telnet logged in successfully....\n"
tn.write("head /proc/meminfo > /home/a.txt")
tn.write("\n")
I would like to copy the textual contents of this file into a buffer variable and process it. That is, I don't want to read from the console/terminal; I just want to redirect the output to a text file and then read from that text file. Does telnetlib offer any direct function to achieve this, or is there an alternate way to do the same?
The TELNET protocol is more or less a remote terminal emulation. It offers no file transfer facilities, because other protocols deal with that. That means that once you have written the file on the remote system, you will have to display it with cat and store the output of the cat command.
Alternatively, you could use a protocol meant for file transfer, like FTP, RSYNC, SFTP or FTPS, to download the remote file. Just use one that is accessible on your remote system.
EDIT: this code reads a file that is local to the remote host.
Please try the following code, which assumes that after you execute your command, you get the following string: "root#mypc:/home/root1"
tn.write("cat /home/a.txt")
tn.write("\n")
data = ''
while data.find("root#mypc:/home/root1") == -1:
    data += tn.read_very_eager()
print data
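Alternatively, telnetlib's read_until blocks until a given string arrives, which avoids the polling loop (a sketch, assuming the same shell prompt):

tn.write("cat /home/a.txt\n")
data = tn.read_until("root#mypc:/home/root1")
print data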
I would like to be able to call a Python script that checks whether the variables passed to it have already been passed to it, and if not, spits out a KML file for Google Earth to read. I've looked at environment variables, to no avail. Essentially, I need to store a string so that the next time the script is called I can reference it. I'll post what I have below. Thanks, and any help is greatly appreciated.
EDIT: I suppose I didn't clearly state the issue. I'm attempting to call a Python script on an Apache server, with KML passing URL variables to the script. One of the URL variables contains a time string; I would like to store that time and compare it to the next time that is passed to the script. IF the times don't match, then print out certain KML; if they DO match, then print an empty document so that Google Earth doesn't duplicate a placemark. In essence, I am filtering the KML files so that I can avoid duplicates. I've also updated the code below.
import cgi
import os
url = cgi.FieldStorage()
bbox = url['test'].value
bbox = bbox.split(',')
lat = float(bbox[0])
lon = float(bbox[1])
alt = float(bbox[2])
when = str(bbox[3])
if when == os.environ['TEMP']:
    kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
           '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
           '</kml>')
else:
    kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
           '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
           '<NetworkLinkControl>\n'
           '<Update>\n'
           '<targetHref>http://localhost/GE/Placemark.kml</targetHref>\n'
           '<Create>\n'
           '<Folder targetId="fld1">\n'
           '<Placemark>\n'
           '<name>View-centered placemark</name>\n'
           '<TimeStamp>\n'
           '<when>%s</when>\n'
           '</TimeStamp>\n'
           '<Point>\n'
           '<coordinates>%.6f,%.6f,%.6f</coordinates>\n'
           '</Point>\n'
           '</Placemark>\n'
           '</Folder>\n'
           '</Create>\n'
           '</Update>\n'
           '</NetworkLinkControl>\n'
           '</kml>'
           ) % (when, lat, lon, alt)
    os.environ['TEMP'] = when
print('Content-Type: application/vnd.google-earth.kml+xml\n')
print(kml)
It seems like you have a few options here to share state:
Use a DB.
Use file-system-based persistence.
Use a separate daemon process that you can connect to via sockets.
Use memcache or some other service to store it in memory.
You can share state between processes in Python: https://docs.python.org/2/library/multiprocessing.html#sharing-state-between-processes
You can also create a manager and have proxy objects:
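A minimal sketch of that approach, with a small state server that every CGI invocation connects to (the address and authkey are placeholders):

# state_server.py - run once (e.g. at boot); it holds the shared dict.
from multiprocessing.managers import BaseManager

shared = {}

class StateManager(BaseManager):
    pass

StateManager.register('get_state', callable=lambda: shared)

if __name__ == '__main__':
    manager = StateManager(address=('127.0.0.1', 50000), authkey=b'secret')
    manager.get_server().serve_forever()

Each request then connects and works with a proxy to the shared dict:

# Inside the CGI script
from multiprocessing.managers import BaseManager

class StateManager(BaseManager):
    pass

StateManager.register('get_state')
manager = StateManager(address=('127.0.0.1', 50000), authkey=b'secret')
manager.connect()
state = manager.get_state()
last = state.get('when')       # None on the very first request
state.update({'when': when})
# Now compare: if when == last, emit the empty KML document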
I'm using a Raspberry Pi with Raspbian (Debian Wheezy, Jan 2014) and Python 3.
I'm starting a Python script from rc.local that captures keyboard input and writes to a file, without logging in.
If the file that the script is writing to has not been created yet, the first keyboard input registers on the screen but isn't written to the file. All subsequent writes work fine.
My code works fine when I run it from the command line as a logged-in user; the first line is written to the new file as expected.
EDITED CODE FROM MIDNIGHTER
#!/usr/bin/env python3.2
import sys
from datetime import datetime
def main():
    f = open('/home/pi/cards.csv', 'r')
    sim = f.read()
    sim = sim.split('\n')
    simSet = set(sim)
    while True:
        try:
            log = open('logs', 'a')
            puk = input()  # text input, i.e., always a string
            included = "true" if puk in simSet else "false"
            print(included, puk)
            log.write("{included: %s, time: %s, number: %s}, \n" % (included, datetime.now(), puk))
            log.close()
        except ValueError:
            log.close()
main()
And the rc.local entry:
sudo python3 /home/pi/rf1
I'm just learning this; please excuse the poor execution.
SOLUTION
I realise now I left out an important detail about a cron job closing and copying the file that was being written to.
I found my answer here: What exactly is Python's file.flush() doing?
Instead of file.close() I used file.flush(), and it works.
Code below:
#!/usr/bin/env python3.2
import sys
from datetime import datetime
def main():
    f = open('/home/pi/cards.csv', 'r')
    sim = f.read()
    sim = sim.split('\n')
    simSet = set(sim)
    log = open('logs', 'a')
    while True:
        try:
            puk = input()  # text input, i.e., always a string
            included = "true" if puk in simSet else "false"
            print(included, puk)
            log.write("{included: %s, time: %s, number: %s}, \n" % (included, datetime.now(), puk))
            log.flush()
        except ValueError:
            log.flush()
main()
The problem was that I was running a cron job that copied the data to another file, and it was accessing the file being written to by the Python program.
The first write after this was not saved to the file, as it was being accessed by another program.
These paragraphs seem to describe what was happening:
https://stackoverflow.com/a/7127162/1441620
The first, flush, will simply write out any data that lingers in a
program buffer to the actual file. Typically this means that the data
will be copied from the program buffer to the operating system buffer.
Specifically what this means is that if another process has that same
file open for reading, it will be able to access the data you just
flushed to the file. However, it does not necessarily mean it has been
"permanently" stored on disk.
I think @Midnighter also suggested that using a with statement to open and close the file would have solved it.
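That variant would look something like this inside the loop; the file is flushed and closed every time the block exits:

with open('logs', 'a') as log:
    log.write("{included: %s, time: %s, number: %s}, \n" % (included, datetime.now(), puk))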
Updated code is in the question > solution.