Convert a String To Integer From File Located on SD Card - python
I am having a problem converting strings to integers so I can perform math on them. I have read about str() and int(), but they don't seem to work here. I am trying to save data from one program to a file located on my Raspberry Pi 2 SD card using the code shown below; the problem area is noted in CAPS near the bottom. I learned that all data saved to the file are stored as strings. So, no problem, I thought: just convert the value back to an integer when reading it. But then I get the following error message:
ValueError: invalid literal for int() with base 10
I tried both Python 2 and Python 3 on my Raspberry Pi 2. The reason for saving the data is that I have a counter in my main program whose last position I want to save, so I can restore it if the Raspberry Pi loses power.
I have been pulling my hair out over this. Can someone please help me find the answer? I have been unable to find it myself on the internet or in the two Python books I purchased.
from __future__ import print_function
import datetime #date and time library
# We begin by creating the file and writing some data.
webcam_home = open("home.txt", "a")
n = 1
m = 10
for i in range(0,5):
    n = n*10
    m = m*2
    webcam_home.write(str(n))
    webcam_home.write("%s\n" % m)
webcam_home.close()
# Now, we open the file and read the contents printing out
# those rows that have values in
webcam_home = open("home.txt", "r")
rows = webcam_home.readlines();
for row in rows:
    print(">", row)
    A = row
    print("6",int("A")+1,"abc")
webcam_home.close()
Here is the answer to my problem: "I ran your code with the line print("6",int("A")+1,"abc") replaced by print("6",int(A)+1,"abc") and got no errors." – L. Kolar
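For readers hitting the same ValueError: int("A") asks Python to parse the literal letter A as a number, while int(A) parses the numeric string stored in the variable A. A minimal sketch of the corrected read-back (the strip() call is my addition to guard against trailing newlines, not part of the original code):

webcam_home = open("home.txt", "r")
for row in webcam_home:
    value = int(row.strip())   # convert the saved string back to an integer
    print(">", value + 1)      # arithmetic works again
webcam_home.close()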
My issue was a corrupted SD card. I replaced it with my backup, tried what L. Kolar said, and life is good again. It worked! I spent way too many hours getting to this point and thank everyone who tried to help me, with special thanks to L. Kolar.
I did some work on trying to get MySQL running, and I think I must have altered something during that process that corrupted my SD card.
Related
Could anyone give a simple but generally useful template for python code to be embedded into LibreOffice Calc?
Update acknowledging comments 1 and 4: indentation corrected, the code does compile in Geany, and obvious redundancies have been stripped. The first question is: why am I getting the args error when I try to run my code? The error message refers to a LO file (linux): /opt/libreoffice7.2/programs/pythonscript.py, line 915. An excerpt from that file says:

def invoke(self, args, out, outindex ):                   # 910
    log.debug( "PythonScript.invoke " + str( args ) )     # 911
    try:                                                  # 912
        if (self.args):                                   # 913
            args += self.args                             # 914
        ret = self.func( *args )                          # 915

My code is:

# -*- coding: UTF-8 -*-
from __future__ import unicode_literals

# Gets the current document
doc = XSCRIPTCONTEXT.getDocument()
ctx = XSCRIPTCONTEXT.getComponentContext()
sm = CTX.ServiceManager

import time
import datetime
import serial
import openpyxl

def PySerLO(args=None):
    from datetime import date
    from openpyxl import load_workbook
    ser = serial.Serial('/dev/ttyUSB0')  # "ls /dev/ttyUSB* -l" if unsure.
    # Load the workbook and select the sheet...
    wb = load_workbook('/media/mpa/UserFiles/LibreOffice/mpaNEW-spreadsheet.xlsx')
    sheet = wb['Sheet1']
    headings = ("Date", "Time", "Raw Data")
    sheet.append(headings)
    try:
        while True:
            # Get current date, get current time, read the serial port
            today = date.today()
            now = datetime.datetime.now().time()
            data = ser.readline().decode()
            if data != "":
                row = (today, now, float((data)))  # For some reason the first data point is garbage so don't do anything with this
                row = (today, ("%s" % now), float((data)))
                sheet.append(row)
                ser.flush()
                # Save the workbook to preserve data
                wb.save('/media/mpa/UserFiles/LibreOffice/mpaNEW-spreadsheet.xlsx')
    finally:
        # Make sure the workbook is saved at end
        wb.save('/media/mpa/UserFiles/LibreOffice/mpaNEW-spreadsheet.xlsx')
        print('Data Finished')

I get this error on executing the macro from a command button... I appreciate your patience and suggestions on improvement thus far. The ultimate aim of this is to track and instantly plot scientific data and to introduce my students to the power of LibreOffice: they download LO and I just pass them the fully contained spreadsheet. They then connect the hardware and they have an excellent resource on LO forever. Almost none of them have ever heard of LibreOffice (or Linux either!)
First of all, I do not think you should start by embedding the macro in the document. I don't think it is required for the reasons you seem to believe. Instead, put your code in My Macros (the LibreOffice user directory), where it is easier to develop. Then call it from a button or by going to Tools > Macros > Run Macro; there is no need to use APSO, as it seems that is causing you even more confusion. The only reason for embedding is convenience, to make it easier to keep the document and a bit of code together. For complex code that needs to be given to multiple people, it's generally better to create a full-fledged extension.

Second, it looks like the error may be from M1.py line 22. Is this your code? I cannot tell where line 22 might be or where exactly the error is coming from. Most of us reading your question are looking for the code the error message comes from, but we cannot find it because you did not provide the relevant file name or line number of your code. It is your job to simplify your code enough that you can see where the error is coming from, and then give that information in the question. The error mentions create_instance, which is the call for creating a UNO service.

Also, I am not comfortable with the idea of storing XSCRIPTCONTEXT values in global variables unless you're sure you know what you're doing (and it's pretty clear you don't. :) )

I also have strong doubts about achieving success by combining different libraries. It looks like you want to use both openpyxl and the UNO interface in the same code, and I would not expect that to turn out well. Just stick with plain UNO, because there are plenty of people who can help with that. (Or if you choose openpyxl, then I will ignore your question and someone knowledgeable about that may be able to help.)

Finally, it seems like import uno is missing. That alone could explain the error, and it's hard to imagine how you have all this code yet didn't even start with that.

Before trying all of this complex code, get a simple example working. It looks like there are a lot of python-uno fundamentals you still need to figure out first, and it doesn't look like you have put much effort into working through a tutorial yet.
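To illustrate the "simple example" advice above, here is a minimal python-uno macro sketch (my own illustration, not code from the thread; the sheet name "Sheet1" and the function name are assumptions). Saved under My Macros, it can be run from Tools > Macros > Run Macro or bound to a button:

import uno  # the UNO runtime; XSCRIPTCONTEXT is injected by the scripting framework

def write_test_value(*args):
    # Get the Calc document the macro was invoked from
    doc = XSCRIPTCONTEXT.getDocument()
    # Assumes a sheet named "Sheet1" exists
    sheet = doc.Sheets.getByName("Sheet1")
    # Cell A1 (column and row indices are zero-based)
    cell = sheet.getCellByPosition(0, 0)
    cell.setString("Hello from python-uno")

# Functions listed here appear in the macro selector
g_exportedScripts = (write_test_value,)

Once something this small runs from a button, the serial-port and spreadsheet logic can be reintroduced one piece at a time.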
How can I prevent my Python web-scraper from stopping?
Ahoy! I've written a quick Python program that grabs the occupancy of a climbing gym every five minutes for later analysis. I'd like it to run non-stop, but I've noticed that after a couple of hours pass, one of two things will happen:
1. It will detect a keyboard interrupt (which I did not enter) and stop, or
2. It will simply stop writing to the .csv file without showing any failure in the shell.
Here is the code:

import os
os.chdir('~/Documents/Other/g1_capacity') #ensure program runs in correct directory if opened elsewhere

import requests
import time
from datetime import datetime
import numpy as np
import csv

def get_count():
    url = 'https://portal.rockgympro.com/portal/public/b01ab221559163c5e9a73e078fe565aa/occupancy?&iframeid=occupancyCounter&fId='
    text = requests.get(url).text
    line = ""
    for item in text.split("\n"):
        if "\'count\'" in item:
            line = (item.strip())
    count = int(line.split(":")[1][0:-1]) #really gross way to get count number for this specific source
    return count

while True: #run until manual stop
    with open('g1_occupancy.csv', mode='a') as occupancy:
        occupancy_writer = csv.writer(occupancy)
        occupancy_writer.writerow([datetime.now(), get_count()]) #append new line to .csv with timestamp and current count
    time.sleep(60 * 5) #wait five minutes before adding new line

I am new to web scraping (in fact, this is my first time) and I'm wondering if anyone might have a suggestion to help eliminate the issue I described above. Many thanks!
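There is no accepted answer in this thread; one hedged direction (my own sketch, not from the thread) is to catch transient request failures inside the loop so a single bad poll does not stop the collector, reusing get_count() exactly as defined in the question above:

import csv
import time
from datetime import datetime

import requests

while True:
    try:
        count = get_count()  # get_count() as defined in the question
        with open('g1_occupancy.csv', mode='a', newline='') as occupancy:
            csv.writer(occupancy).writerow([datetime.now(), count])
    except (requests.exceptions.RequestException, ValueError) as exc:
        # Log and carry on; retry on the next cycle instead of crashing
        print(datetime.now(), 'poll failed:', exc)
    time.sleep(60 * 5)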
form automation 12 digit number extracting with decimal
I'm working on a form automation script in Python. It extracts data from an Excel file and fills it out in an online form. The problem I'm facing is with a 12-digit number in one column of the Excel file. The number looks fine in the Excel file (using a custom format for it), but when the Python script extracts the data it comes out in scientific notation. I've tried many things but nothing really seems to work. I'm using xlrd. My current script:

stradh = str(sheet.cell(row,col).value)
browser.find_element_by_id('number').send_keys(stradh)

Number in the Excel file: 357507103697
Number when extracted by the Python script: 3.57507103697e+11

Thank you.
You need the decimal module to convert the number from scientific notation:

from xlrd import *
import decimal

workbook = open_workbook('temp.xlsx')
sheet = workbook.sheet_by_index(2)
value = sheet.cell_value(0, 0)
print(decimal.Decimal(value))
You can try using long: long(sheet.cell(row,col).value). (Note that long exists only in Python 2; in Python 3, plain int covers it.) I hope this helps. If you need a string, you can then call str() on the result.
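A hedged sketch combining the two suggestions with the questioner's own names (sheet, row, col, and the 'number' element id come from the question): xlrd returns numeric cells as Python floats, so format the value as an integer before building the string to type into the form:

value = sheet.cell(row, col).value   # xlrd gives 357507103697.0 (a float)
stradh = str(int(value))             # '357507103697' rather than '3.57507103697e+11'
browser.find_element_by_id('number').send_keys(stradh)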
How to use Python to extract data from the Met Office JSON download
I am using Python 3.4. I have started a project to download the UK Met Office forecast data (in JSON format) and use the information as a weather compensator for my home heating system. I have succeeded in downloading the JSON data file from the Met Office, and now I want to extract the info I need. I can do this by converting the file to a string and using .find() and int() to extract the data, but this seems crude (though effective). As JSON is said to be a well-used data interchange format, there must be a better way to do this. I have found things like json.load and json.loads, and also json.JSONDecoder.decode, but I haven't had any success in using these, and I really have little idea of what I am doing! My code is:

import urllib.request
import json

#Comment: THIS IS THE CALL TO GET THE MET OFFICE FILE FROM THE INTERNET
#Comment: **** = my personal met office API key, which I had better keep to myself
response = urllib.request.urlopen('http://datapoint.metoffice.gov.uk/public/data/val/wxfcs/all/json/354037?res=3hourly&key=****')
FCData = response.read()
FCDataStr = str(FCData)
#Comment: END OF THE CALL TO GET MET OFFICE FILE FROM THE INTERNET

#Comment: Example of data extraction
ChPos = FCDataStr.find('"DV"') #Find "DV"
ChPos = FCDataStr.find('"dataDate"', ChPos, ChPos+50) #Find "dataDate"
FileDataDate = FCDataStr[ChPos+12:ChPos+22] #Extract the date of the file
#Comment: And so on

When using json.loads(FCDataStr) I get the following error message:

ValueError: Expecting value: line 1 column 1 (char 0)

By deleting the b' at the start and the ' at the end, this error goes away (see below). Printing the JSON file in string format, using print(FCDataStr), gives:
Probability"}]},"DV":{"dataDate":"2014-07-29T20:00:00Z","type":"Forecast","Location":{"i":"354037","lat":"51.7049","lon":"-2.9022","name":"USK","country":"WALES","continent":"EUROPE","elevation":"43.0","Period":[{"type":"Day","value":"2014-07-29Z","Rep":[{"D":"NNW","F":"22","G":"11","H":"51","Pp":"4","S":"9","T":"24","V":"VG","W":"7","U":"7","$":"900"},{"D":"NW","F":"19","G":"16","H":"61","Pp":"8","S":"11","T":"22","V":"EX","W":"8","U":"1","$":"1080"},{"D":"NW","F":"16","G":"20","H":"70","Pp":"1","S":"11","T":"18","V":"VG","W":"2","U":"0","$":"1260"}]},{"type":"Day","value":"2014-07-30Z","Rep":[{"D":"NW","F":"13","G":"16","H":"84","Pp":"0","S":"7","T":"14","V":"VG","W":"0","U":"0","$":"0"},{"D":"WNW","F":"12","G":"13","H":"90","Pp":"0","S":"7","T":"13","V":"VG","W":"0","U":"0","$":"180"},{"D":"WNW","F":"13","G":"11","H":"87","Pp":"0","S":"7","T":"14","V":"GO","W":"1","U":"1","$":"360"},{"D":"SW","F":"18","G":"9","H":"67","Pp":"0","S":"4","T":"19","V":"VG","W":"1","U":"2","$":"540"},{"D":"WNW","F":"21","G":"13","H":"56","Pp":"0","S":"9","T":"22","V":"VG","W":"3","U":"6","$":"720"},{"D":"W","F":"21","G":"20","H":"55","Pp":"0","S":"11","T":"23","V":"VG","W":"3","U":"6","$":"900"},{"D":"W","F":"18","G":"22","H":"57","Pp":"0","S":"11","T":"21","V":"VG","W":"1","U":"2","$":"1080"},{"D":"WSW","F":"16","G":"13","H":"80","Pp":"0","S":"7","T":"16","V":"VG","W":"0","U":"0","$":"1260"}]},{"type":"Day","value":"2014-07-31Z","Rep":[{"D":"SW","F":"14","G":"11","H":"91","Pp":"0","S":"4","T":"15","V":"GO","W":"0","U":"0","$":"0"},{"D":"SW","F":"14","G":"11","H":"92","Pp":"0","S":"4","T":"14","V":"GO","W":"0","U":"0","$":"180"},{"D":"SW","F":"15","G":"11","H":"89","Pp":"3","S":"7","T":"16","V":"GO","W":"3","U":"1","$":"360"},{"D":"WSW","F":"17","G":"20","H":"79","Pp":"28","S":"11","T":"18","V":"GO","W":"3","U":"2","$":"540"},{"D":"WSW","F":"18","G":"22","H":"72","Pp":"34","S":"11","T":"20","V":"GO","W":"10","U":"5","$":"720"},{"D":"WSW","F":"18","G":"22","H":"66","Pp":"13","S":"11","T":"20","V":"VG","W":"7","U":"5","$":"900"},{"D":"WSW","F":"17","G":"22","H":"69","Pp":"36","S":"11","T":"19","V":"VG","W":"10","U":"2","$":"1080"},{"D":"WSW","F":"16","G":"16","H":"84","Pp":"6","S":"9","T":"17","V":"GO","W":"2","U":"0","$":"1260"}]},{"type":"Day","value":"2014-08-01Z","Rep":[{"D":"SW","F":"16","G":"13","H":"91","Pp":"4","S":"7","T":"16","V":"GO","W":"7","U":"0","$":"0"},{"D":"SW","F":"15","G":"11","H":"93","Pp":"5","S":"7","T":"16","V":"GO","W":"7","U":"0","$":"180"},{"D":"SSW","F":"15","G":"11","H":"93","Pp":"7","S":"7","T":"16","V":"GO","W":"7","U":"1","$":"360"},{"D":"SSW","F":"17","G":"18","H":"79","Pp":"14","S":"9","T":"18","V":"GO","W":"7","U":"2","$":"540"},{"D":"SSW","F":"17","G":"22","H":"74","Pp":"43","S":"11","T":"19","V":"GO","W":"10","U":"5","$":"720"},{"D":"SW","F":"16","G":"22","H":"81","Pp":"48","S":"11","T":"18","V":"GO","W":"10","U":"5","$":"900"},{"D":"SW","F":"16","G":"18","H":"80","Pp":"55","S":"9","T":"17","V":"GO","W":"12","U":"1","$":"1080"},{"D":"SSW","F":"15","G":"16","H":"89","Pp":"38","S":"7","T":"16","V":"GO","W":"9","U":"0","$":"1260"}]},{"type":"Day","value":"2014-08-02Z","Rep":[{"D":"S","F":"14","G":"11","H":"94","Pp":"15","S":"7","T":"15","V":"GO","W":"7","U":"0","$":"0"},{"D":"SSE","F":"14","G":"11","H":"94","Pp":"16","S":"7","T":"15","V":"GO","W":"7","U":"0","$":"180"},{"D":"S","F":"14","G":"13","H":"93","Pp":"36","S":"7","T":"15","V":"GO","W":"10","U":"1","$":"360"},{"D":"S","F":"15","G":"20","H":"84","Pp":"62","S":"11","T":"17","V":"GO","W":"14","U":"2","$":"540"},{"D":"SSW",
"F":"16","G":"22","H":"78","Pp":"63","S":"11","T":"18","V":"GO","W":"14","U":"5","$":"720"},{"D":"WSW","F":"16","G":"27","H":"66","Pp":"59","S":"13","T":"19","V":"VG","W":"14","U":"5","$":"900"},{"D":"WSW","F":"15","G":"25","H":"68","Pp":"39","S":"13","T":"18","V":"VG","W":"10","U":"2","$":"1080"},{"D":"SW","F":"14","G":"16","H":"80","Pp":"28","S":"9","T":"15","V":"VG","W":"0","U":"0","$":"1260"}]}]}}}}' The result of using: DecodedJSON = json.loads(FCDataStr) print(DecodedJSON) gives a very similar result to the original FCDataStr file. How do I proceed to extract the data (such as temperature, wind speed etc for each 3 hourly forecast) from the file?
For other clueless people who may want to use the UK Met Office 3-hourly forecast data feed, below is the solution that I am using:

import urllib.request
import json

### THIS IS THE CALL TO GET THE MET OFFICE FILE FROM THE INTERNET
response = urllib.request.urlopen('http://datapoint.metoffice.gov.uk/public/data/val/wxfcs/all/json/**YourLocationID**?res=3hourly&key=**your_api_key**')
FCData = response.read()
FCDataStr = FCData.decode('utf-8')
### END OF THE CALL TO GET MET OFFICE FILE FROM THE INTERNET

#Converts JSON data to a dictionary object
FCData_Dic = json.loads(FCDataStr)

#The following are examples of extracting data from the dictionary object.
#The JSON data is heavily nested.
#Each [] goes one level down, usually defined with {} in the JSON data.
dataDate = (FCData_Dic['SiteRep']['DV']['dataDate'])
print('dataDate =', dataDate)

#There are also [] in the JSON data, which are referenced with integers,
#starting from [0].
#Here, the [0] refers to the first day's block of data defined with [].
DateDay0 = (FCData_Dic['SiteRep']['DV']['Location']['Period'][0]['value'])
print('DateDay0 =', DateDay0)

#The second [0] picks out each of the first day's forecast data, in this case the time, referenced by '$'.
TimeOfFC = (FCData_Dic['SiteRep']['DV']['Location']['Period'][0]['Rep'][0]['$'])
print('TimeOfFC =', TimeOfFC)

#Ditto for the temperature.
Temperature = int((FCData_Dic['SiteRep']['DV']['Location']['Period'][0]['Rep'][0]['T']))
print('Temperature =', Temperature)

#Ditto for the weather type (a code number).
WeatherType = int((FCData_Dic['SiteRep']['DV']['Location']['Period'][0]['Rep'][0]['W']))
print('WeatherType =', WeatherType)

I hope this helps somebody!
This is the problem:

FCDataStr = str(FCData)

When you call str on a bytes object, what you get is the string representation of that bytes object: in quotes, with a b prefix, and with special characters backslash-escaped. If you want to decode the binary data to text, you have to do that with the decode method:

FCDataStr = FCData.decode('utf-8')

(I'm guessing UTF-8 because JSON is always supposed to be in UTF-8 unless otherwise specified.)

In more detail: urllib.request.urlopen returns an http.client.HTTPResponse, which is a binary file-like object (it implements io.RawIOBase). You can't pass that to json.load because it wants a text-file-like object, something with a read method that returns str, not bytes. You could wrap your HTTPResponse in an io.BufferedReader, then wrap that in an io.TextIOWrapper (with encoding='utf-8'), then pass that to json.load, but that's probably more work than you want to do.

So the simplest thing to do is exactly what you were trying to do, just using decode instead of str:

data_bytes = response.read()
data_str = data_bytes.decode('utf-8')
data_dict = json.loads(data_str)

Then, don't try to access the data in data_str; that's just a string representing the JSON encoding of your data. data_dict is the actual data. For example, to find the dataDate of the DV of the SiteRep, you just do this:

data_dict['SiteRep']['DV']['dataDate']

That will get you the string '2014-07-31T14:00:00Z'. You'll still probably want to convert that to a datetime.datetime object (because JSON only understands a few basic types: strings, numbers, lists, and dicts), but it's still a lot better than trying to pick it out of data_str by find-ing or guessing at the offsets.

My guess is that you've found some sample code written for Python 2.x, where you can convert between byte strings and Unicode strings just by calling the appropriate constructors without specifying an encoding, which then defaults to sys.getdefaultencoding(); often (at least on Mac or most modern Linux distros) that's UTF-8, so it just happened to work despite being wrong. In which case you may want to find some better sample code to learn from…
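As a small follow-up to the datetime.datetime suggestion above, a hedged sketch of that conversion (the format string is an assumption based on the dataDate values shown in the question):

from datetime import datetime

data_date_str = data_dict['SiteRep']['DV']['dataDate']    # e.g. '2014-07-29T20:00:00Z'
data_date = datetime.strptime(data_date_str, '%Y-%m-%dT%H:%M:%SZ')
print(data_date)                                           # 2014-07-29 20:00:00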
I have been working on parsing the Met Office DataPoint output. Thanks to the response above, I have something that works for me. I am writing the data I am interested in to a CSV file:

import sys
import os
import urllib.request
import json

### THIS IS THE CALL TO GET THE MET OFFICE FILE FROM THE INTERNET
response = urllib.request.urlopen('http://datapoint.metoffice.gov.uk/public/data/val/wxobs/all/json/3351?res=hourly&?key=<my key>')
FCData = response.read()
FCDataStr = FCData.decode('utf-8')
### END OF THE CALL TO GET MET OFFICE FILE FROM THE INTERNET

#Converts JSON data to a dictionary object
FCData_Dic = json.loads(FCDataStr)

# Open output file for appending
fName = <my filename>
if (not os.path.exists(fName)):
    print(fName, ' does not exist')
    exit()
fOut = open(fName, 'a')

# Loop through each day, will nearly always be 2 days,
# unless run at midnight.
i = 0
j = 0
for k in range(24): # there will be 24 values altogether
    # find the first hour value for the first day
    DateZ = (FCData_Dic['SiteRep']['DV']['Location']['Period'][i]['value'])
    hhmm = (FCData_Dic['SiteRep']['DV']['Location']['Period'][i]['Rep'][j]['$'])
    Temperature = (FCData_Dic['SiteRep']['DV']['Location']['Period'][i]['Rep'][j]['T'])
    Humidity = (FCData_Dic['SiteRep']['DV']['Location']['Period'][i]['Rep'][j]['H'])
    DewPoint = (FCData_Dic['SiteRep']['DV']['Location']['Period'][i]['Rep'][j]['Dp'])
    recordStr = '{},{},{},{},{}\n'.format(DateZ, hhmm, Temperature, Humidity, DewPoint)
    fOut.write(recordStr)
    j = j + 1
    if (hhmm == '1380'):
        i = i + 1
        j = 0

fOut.close()
print('Records added to ', fName)
win32com Excel data input error
I'm exporting the results of my script into an Excel spreadsheet. Everything works fine and I can put big sets of data into the spreadsheet, but sometimes an error occurs:

File "C:\Python26\lib\site-packages\win32com\client\dynamic.py", line 550, in __setattr__
    self._oleobj_.Invoke(entry.dispid, 0, invoke_type, 0, value)
pywintypes.com_error: (-2147352567, 'Exception.', (0, None, None, None, 0, -2146777998), None)

I suppose it's not a problem of input data format. I put in several different types of data (strings, ints, floats, lists) and it works fine. When I run the script a second time it works fine with no error. What's going on?

PS. This is the code that generates the error; what's strange is that the error doesn't always occur. Say 30% of runs result in an error:

import win32com.client

def Generate_Excel_Report():
    Excel = win32com.client.Dispatch("Excel.Application")
    Excel.Workbooks.Add(1)
    Cells = Excel.ActiveWorkBook.ActiveSheet.Cells
    for i in range(100):
        Row = int(35+i)
        for j in range(10):
            Cells(int(Row), int(5+j)).Value = "string"
    for i in range(100):
        Row = int(135+i)
        for j in range(10):
            Cells(int(Row), int(5+j)).Value = 32.32 #float

Generate_Excel_Report()

The strangest thing for me is that when I run the script with the same code and the same input many times, sometimes an error occurs and sometimes not.
This is most likely a synchronous COM access error. See my answer to Error while working with excel using python for details about why and a workaround. I can't see why the file format/extension would make a difference. You'd be calling the same COM object either way. My experience with this error is that it's more or less random, but you can increase the chances of it happening by interacting with Excel while your script is running.
edit: It doesn't change a thing. The error still occurs, but less often: once in 10 simulations, while with the .xlsx file it was once in 3 simulations. Please help.

The problem was with the file I was opening. It was .xlsx; once I saved it as .xls the problem disappeared. So beware: do not ever use the COM interface with .xlsx or you'll get in trouble!
You should disable Excel interactivity while doing this:

import win32com.client

def Generate_Excel_Report():
    Excel = win32com.client.Dispatch("Excel.Application")

    #you won't see what happens (faster)
    Excel.ScreenUpdating = False
    #clicks on the Excel window have no effect
    #(set back to True before closing Excel)
    Excel.Interactive = False

    Excel.Workbooks.Add(1)
    Cells = Excel.ActiveWorkBook.ActiveSheet.Cells
    for i in range(100):
        Row = int(35+i)
        for j in range(10):
            Cells(int(Row), int(5+j)).Value = "string"
    for i in range(100):
        Row = int(135+i)
        for j in range(10):
            Cells(int(Row), int(5+j)).Value = 32.32 #float

    Excel.ScreenUpdating = True
    Excel.Interactive = True

Generate_Excel_Report()

You could also do this to increase your code's performance:

#Construct data block
string_line = []
for i in range(10):
    string_line.append("string")
string_block = []
for i in range(100):
    string_block.append(string_line)

#Write the whole data block in one call
#(the Range must have the same shape as the block: 100 rows x 10 columns)
ws = Excel.ActiveWorkbook.Sheets(1)
ws.Range(ws.Cells(35, 5), ws.Cells(134, 14)).Value = string_block
I had the same error while using xlwings to interact with Excel (xlwings also uses win32com clients in the backend). After some debugging, I realized that this error pops up whenever the code is executed while the Excel file (containing the data) is not in focus. To resolve the issue, I simply select the file being processed before running the code, and it always works for me.
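For completeness, a hedged sketch of the generic retry idea behind the synchronous COM access answer above (my own illustration, not the linked workaround's exact code): when Excel is briefly busy and rejects a call with com_error, wait a moment and try again instead of failing:

import time
import pywintypes

def com_retry(action, tries=5, delay=0.5):
    # Call action(); on a transient com_error, wait and retry a few times
    for attempt in range(tries):
        try:
            return action()
        except pywintypes.com_error:
            if attempt == tries - 1:
                raise
            time.sleep(delay)

# Example usage with the Cells object from the question:
# com_retry(lambda: setattr(Cells(Row, 5), 'Value', 'string'))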