I have a temperature sensor ([LM35][1]) interfaced with an Arduino board, and my [sketch][2] logs values to the serial port (/dev/ttyACM0 on Ubuntu). I installed pySerial and can log the temperature values into a file using the command
python -m serial.tools.miniterm /dev/ttyACM0 >> templogger.csv
So it will log values like
27
28
27
into the templogger.csv file.
Instead of logging to the CSV file, can I log these values directly to a PostgreSQL database, say Db1, with username 'abc' and password 'xyz'? Is this possible through Python? Could you please help by providing the required script?
Start with psycopg2, which implements the Python database API.
You'll probably want to start with simple individual INSERTs, but later on you'll be able to batch work into transactions or use COPY to improve write performance.
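For example, a minimal sketch of a single INSERT using the credentials from the question (the temps table name and schema are assumptions here, created just for illustration):

import psycopg2

# Table name and schema are illustrative; adjust to your own.
conn = psycopg2.connect(dbname='Db1', user='abc', password='xyz', host='localhost')
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS temps (value integer)")
cur.execute("INSERT INTO temps (value) VALUES (%s)", (27,))
conn.commit()  # nothing is persisted until the transaction commits
conn.close()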
In general, it's better to post questions for more specific problems than this. See Stack Overflow Help. If you're stuck on something it's fine to ask for help, but it's preferable that you try to figure it out first. You'll get better results if your questions are more like: "I've tried to use psycopg2 to connect my Python program to PostgreSQL so I can write sensor logging to the database, but when I INSERT a row I get [exact error text here]. Searching for the error message hasn't helped me, so I'm a bit stuck."
In other words, try it, and if you get stuck, ask. I'd normally just close-vote a question like this as "too localized" or "not a real question", which would give you an automatic link to the question writing guide, but you're new here and you've made an effort to write a decent question so I'm trying to explain in a bit more detail.
Instead of logging to the CSV file, can I log these values directly to a PostgreSQL database, say Db1, with username 'abc' and password 'xyz'? Is this possible through Python? Could you please help by providing the required script?
Have a good look at a SQLAlchemy tutorial and try to build code that works for you. Basically, you can extrapolate from this example:
from serial import Serial
from sqlalchemy import create_engine, MetaData, Table
from sqlalchemy.orm import mapper, create_session

class Temperature(object):
    pass

class TemperatureStore:
    def __init__(self, dburl):
        # dburl looks like 'user:password@host:port/dbname'
        self.db = create_engine('postgresql://%s' % dburl)
        self.db.echo = True
        metadata = MetaData(bind=self.db)
        # Reflect the existing 'temperatures' table and map the class onto it
        temperatures = Table('temperatures', metadata, autoload=True)
        mapper(Temperature, temperatures)
        self.session = create_session(bind=self.db)

    def push_temperature(self, value):
        temp = Temperature()
        temp.value = value
        self.session.add(temp)
        self.session.flush()  # write the row out

def main():
    ser = Serial(port='/dev/ttyXXX', baudrate=115200)
    tempstore = TemperatureStore('abc:xyz@localhost:5432/Db1')
    while True:
        line = ser.readline()  # one reading per line, e.g. b'27\r\n'
        tempstore.push_temperature(int(line))

if __name__ == '__main__':
    main()
This is just an example that may or may not work (I did not test it; I wrote it in five minutes), so take it as inspiration and try to work out your own!
Related
I would like to run a large Python script written by someone else (https://github.com/PatentsView/PatentsView-Disambiguation, to be precise). At numerous points during execution, the code connects to a MySQL database to read some data. The problem is that I
a) cannot get a MySQL installation on the server I use. Being only a guest at the institution whose computers I use, I cannot really influence IT to change this;
b) would like to alter the code as little as possible, since I have very little Python experience.
The original code uses a function granted_table() that returns a mysql.connector.connect(...), where host, user, password, etc. are given in the dots. My idea is to swap this function out for one that reads in TSV files instead, which I have all stored on my machine. This way, I do not have to go over the long script and fiddle with it (I have almost no experience with Python and might mess something up without realizing it).
I have tried a few things already and it almost works, but not quite. The reproducible example follows:
First, there is the original function that accesses MySQL, which has to go:
def granted_table(config):
    if config['DISAMBIGUATION']['granted_patent_database'].lower() == 'none':
        logging.info('[granted_table] no db given')
        return None
    else:
        logging.info('[granted_table] trying to connect to %s',
                     config['DISAMBIGUATION']['granted_patent_database'])
        return mysql.connector.connect(host=config['DATABASE']['host'],
                                       user=config['DATABASE']['username'],
                                       password=config['DATABASE']['password'],
                                       database=config['DISAMBIGUATION']['granted_patent_database'])
Second, the function that calls the one above to read in data (I shortened and simplified it but kept the structure as I understand it; this version only prints whatever data it receives). I do not want to change this if at all possible:
def build_granted(config):
    cnx = granted_table(config)
    cursor = cnx.cursor()
    query = "SELECT uuid, patent_id, assignee_id, rawlocation_id, type, name_first, name_last, organization, sequence FROM rawassignee"
    cursor.execute(query)
    for rec in cursor:
        print(rec)
Third, there is my attempt at a new granted_table(config) that behaves like the old one but does not access MySQL:
def granted_table(config):
    return placeholder()

class placeholder:
    def cursor(self):
        return conn_to_data()

class conn_to_data:
    def execute(self, query):  # class method
        # define folder etc.
        print(query)
        filename = "foo.tsv"
        path = "/share/bar/"
        file = open(path + filename)
        print(file.read(100))
        return file
Now, if I execute all of the above and then run
config="test"
build_granted(config)
the code breaks with
"Type Error: conn_to_data object is not iterable",
so it does not access the actual data as intended. However, if I change the line cursor.execute(query) to cursor=cursor.execute(query) it works as intended. I could do that if there is really no better solution, but it seems like there must be. I am a python newb, so maybe there even is a simple function/class that is intended for this and one of you can point it out. You might realize that the code currently always opens the same file, without actually interpreting the query. This is something I will still have to fix later, once it even works at all. It seems to me to be a somewhat messy but conceptually simple string-parsing problem, but if some of you have a smart idea, it would also be a huge help.
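An untested sketch of what I mean: a fake cursor whose execute() loads the TSV rows and which supports the for rec in cursor: loop via __iter__ (same placeholder paths as above):

import csv

class conn_to_data:
    def __init__(self):
        self._rows = []

    def execute(self, query):
        # Still ignores the query and always opens the same file for now
        print(query)
        filename = "foo.tsv"
        path = "/share/bar/"
        with open(path + filename) as f:
            self._rows = list(csv.reader(f, delimiter='\t'))

    def __iter__(self):
        # Lets build_granted() iterate over the cursor without any changes there
        return iter(self._rows)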
Thanks in advance!
Morning folks,
I'm trying to get a few unit tests going in Python to confirm my code is working, but I'm having a really hard time getting any kind of Mock to fit into my test cases. I'm new to Python unit testing, so this has been a trying week so far.
The summary of the program is I'm attempting to do serial control of a commercial monitor I got my hands on and I thought I'd use it as a chance to finally use Python for something rather than just falling back on one of the other languages I know. I've got pyserial going, but before I start shoving a ton of commands out to the TV I'd like to learn the unittest part so I can write for my expected outputs and inputs.
I've tried using a library called dummyserial, but it didn't seem to be recognising the output I was sending. I thought I'd give mock_open a try as I've seen it works like a standard IO as well, but it just isn't picking up on the calls either. Samples of the code involved:
def testSendCmd(self):
    powerCheck = '{0}{1:>4}\r'.format(SharpCodes['POWER'], SharpCodes['CHECK']).encode('utf-8')
    read_text = 'Stuff\r'
    mo = mock_open(read_data=read_text)
    mo.in_waiting = len(read_text)
    with patch('__main__.open', mo):
        with open('./serial', 'a+b') as com:
            tv = SharpTV(com=com, TVID=999, tvInput='DVI')
            tv.sendCmd(SharpCodes['POWER'], SharpCodes['CHECK'])
            com.write(b'some junk')
            print(mo.mock_calls)
            mo().write.assert_called_with('{0}{1:>4}\r'.format(SharpCodes['POWER'], SharpCodes['CHECK']).encode('utf-8'))
And in the SharpTV class, the function in question:
def sendCmd(self, type, msg):
    sent = self.com.write('{0}{1:>4}\r'.format(type, msg).encode('utf-8'))
    print('{0}{1:>4}\r'.format(type, msg).encode('utf-8'))
Obviously, I'm attempting to control a Sharp TV. I know the commands are correct; that isn't the issue. The issue is just the testing. According to the documentation on the mock_open page, calling mo.mock_calls should show that a call was made, but I'm getting just an empty list [], even in spite of the blatantly wrong com.write(b'some junk'), and mo().write.assert_called_with(...) fails with an assertion error because it isn't detecting the write from within sendCmd. What's really bothering me is that I can do the examples from the mock_open section in interactive mode and they work as expected.
I'm missing something; I just don't know what. I'd like help getting either dummyserial or mock_open working.
To answer one part of my question, I figured out the functionality of dummyserial. The following works now:
def testSendCmd(self):
    powerCheck = '{0}{1:>4}\r'.format(SharpCodes['POWER'], SharpCodes['CHECK'])
    com = dummyserial.Serial(
        port='COM1',
        baudrate=9600,
        ds_responses={powerCheck: powerCheck}
    )
    tv = SharpTV(com=com, TVID=999, tvInput='DVI')
    tv.sendCmd(SharpCodes['POWER'], SharpCodes['CHECK'])
    self.assertEqual(tv.recv(), powerCheck)
Previously I was encoding the dictionary values as UTF-8. The dummyserial library decodes whatever you write(...) to it, so the lookup is a straight string-to-string comparison. It also encodes whatever you read() back out as latin-1.
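In other words, illustratively, the failing version used encoded entries while the working one keeps plain strings:

# Before (failed): bytes never match the decoded string dummyserial compares against
ds_responses = {powerCheck.encode('utf-8'): powerCheck.encode('utf-8')}

# After (works): straight string vs. string comparison
ds_responses = {powerCheck: powerCheck}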
I am able to connect to AWS Redshift with psycopg2 using Python; I can query tables and get data back, etc...
However, when I try to run a CREATE FUNCTION (UDF) statement through psycopg2, nothing happens: no error is returned, but nothing gets created.
Here's my code:
def _applyFunctionToDB():
    con = psycopg2.connect(dbname=redshiftDatabase, host=redshiftHost, port='5439', user=redshiftUser, password=redshiftPwd)
    cur = con.cursor()
    udf = _fileOpenWrite(udfFile)
    size = os.stat(udfFile).st_size
    udfCode = udf.read(size)
    cur.execute(udfCode)
    con.close()
I have run it through the debugger and all the pieces are there, but nothing happens when the "execute" method is invoked on the cursor.
If anyone has any advice and/or ideas on what might be going on here, please advise.
Thanks!
Found the answer just after posting, here: Copying data from S3 to AWS redshift using python and psycopg2.
I need to invoke a commit.
So, add con.commit() in the above code after execute().
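For reference, the corrected function looks like this (same variables as above; the commit is the only change):

def _applyFunctionToDB():
    con = psycopg2.connect(dbname=redshiftDatabase, host=redshiftHost, port='5439', user=redshiftUser, password=redshiftPwd)
    cur = con.cursor()
    udf = _fileOpenWrite(udfFile)
    size = os.stat(udfFile).st_size
    udfCode = udf.read(size)
    cur.execute(udfCode)
    con.commit()  # without this, the CREATE FUNCTION is rolled back when the connection closes
    con.close()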
Good Day.
I have faced the following issue using pymongo==2.1.1 on Python 2.7 with MongoDB 2.4.8.
I have tried to find a solution using Google and Stack Overflow but failed.
What's the issue?
I have the following function:
from bson.code import Code
from pymongo import Connection

def read(groupped_by=None):
    reducer = Code("""
        function(obj, prev){
            prev.count++;
        }
    """)
    client = Connection('localhost', 27017)
    db = client.urlstats_database
    results = db.http_requests.group(key={k: 1 for k in groupped_by},
                                     condition={},
                                     initial={"count": 0},
                                     reduce=reducer)
    groupped_by = list(groupped_by) + ['count']
    result = [tuple(res[col] for col in groupped_by) for res in results]
    return sorted(result)
Then I am trying to write a test for this function:
class UrlstatsViewsTestCase(TestCase):
    test_data = {'data%s' % i: 'data%s' % i for i in range(6)}

    def test_one_criterium(self):
        client = Connection('localhost', 27017)
        db = client.urlstats_database
        for column in self.test_data:
            db.http_requests.remove()
            db.http_requests.insert(self.test_data)
            response = read([column])
            self.assertEqual(response, [(self.test_data[column], 1)])
This test sometimes fails, as I understand it, because of latency: as far as I can see, the response still contains data that should have been removed.
If I add a delay after remove(), the test passes every time.
Is there any proper way to test such functionality?
Thanks in advance.
A few questions regarding your environment / code:
What version of pymongo are you using?
If you are using any of the newer versions that have MongoClient, is there any specific reason you are using Connection instead of MongoClient?
The reason I ask the second question is that Connection provides fire-and-forget functionality for the operations you are doing, while MongoClient works in safe mode by default and has also been the preferred approach since MongoDB 2.2+.
The behaviour you see is entirely consistent with using Connection instead of MongoClient. With Connection, your remove is sent to the server, and the moment it leaves the client side, your program moves on to the next step, which is to add new entries. Depending on latency and how long the remove operation takes to complete, the two are going to conflict, as you have already noticed in your test case.
Can you change to use MongoClient and see if that helps you with your test code?
Additional Ref: pymongo: MongoClient or Connection
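A minimal sketch of the suggested change (assuming pymongo 2.2+, where MongoClient acknowledges writes by default; test_data as in your test case):

from pymongo import MongoClient

client = MongoClient('localhost', 27017)  # acknowledged writes by default
db = client.urlstats_database
db.http_requests.remove()           # blocks until the server acknowledges the remove
db.http_requests.insert(test_data)  # so this insert cannot race ahead of it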
Thanks All.
There is no MongoClient class in the version of pymongo I use, so I was forced to find out what exactly differs.
As soon as I upgrade to 2.2+ I will test whether everything is OK with MongoClient. But as for the Connection class, one can use a write concern to control this latency.
In older versions, one should create the connection with the corresponding arguments.
I have tried these two: journal=True, safe=True (the journal write concern can't be used in non-safe mode).
j or journal: Block until write operations have been committed to the journal. Ignored if the server is running without journaling. Implies safe=True.
I think this makes performance worse, but for automated tests it should be OK.
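For example (an untested sketch for pymongo 2.1, based only on the documented arguments quoted above):

from pymongo import Connection

# safe=True makes each write wait for server acknowledgement;
# journal=True additionally blocks until the write reaches the journal.
client = Connection('localhost', 27017, safe=True, journal=True)
db = client.urlstats_database
db.http_requests.remove()  # now blocks instead of fire-and-forget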
Hello guys, I am currently working on a Python project at my school. First I want to make clear that I'm not a Python programmer (I was just called in to put out the flames on this project because no one else would, and I was brave enough to say yes).
I have the following problem. I have to write a method that connects to an existing localhost MySQL database (I'm using connector version 1.0.12 and Python 2.6) and then does pretty basic stuff. The parameters are sent by a GUI written in GTK (I didn't write that interface). So I wrote my method like this:
def compMySQL(self, user, database, password, db_level, table_level, column_level):
    sql_page_textview = self.mainTree.get_widget('sql_text_view')
    sql_page_textview.modify_font(pango.FontDescription("courier 10"))
    sql_page_buffer = sql_page_textview.get_buffer()
    # Gonna try connecting to DB
    try:
        print("Calling conn with U:{0} P:{1} DB:{2}".format(user, password, database))
        cnxOMC = mysql.connector.connect(user, password, 'localhost', database)
    except:
        print "Error: Database connection failed. User name or Database name may be wrong"
        return
    # More code ...
But when I run my code I get this:
Calling conn with U:root P:PK17LP12r DB:TESTERS
Error: Database connection failed. User name or Database name may be wrong
And I don't know why, since the arguments sent are the same ones that get printed (telling me that the GUI the other guy coded works fine), and they are valid login parameters. If I hardcode the login parameters directly instead of taking them from the GUI, everything goes OK and the function executes properly; the following code runs nice and smooth:
def compMySQL(self, user, database, password, db_level, table_level, column_level):
    sql_page_textview = self.mainTree.get_widget('sql_text_view')
    sql_page_textview.modify_font(pango.FontDescription("courier 10"))
    sql_page_buffer = sql_page_textview.get_buffer()
    # Gonna try hardcoding
    try:
        #print("Calling conn with U:{0} P:{1} DB:{2}".format(user, password, database))
        cnxOMC = mysql.connector.connect(user="root", password='PK17LP12r', host='localhost', database='TESTERS')
        print 'No prob with conn'
    except:
        print "Error: Database connection failed. User name or Database name may be wrong"
        return
    # more code ...
Console output:
No prob with conn
Any ideas, guys? This one is killing me. I'm just learning Python, but I imagine the problem is something very easy for a seasoned Python developer, so any help would be greatly appreciated.
Thanks in advance.
The difference between the two versions is not that you are hardcoding the parameters in the second one; it is that you are passing keyword arguments rather than positional ones. The docs for MySQL Connector don't seem to give an actual positional order, and there's no reason to think the parameters are in the order you've given, so it looks like you should always call with keyword arguments:
cnxOMC = mysql.connector.connect(user=user, password=password, host='localhost', database=database)