I have been cobbling together a script from multiple sources to poll my Raspberry Pi's internal SoC temperature. I want to then stream that data to my Plotly account.
I have a semi-working script, but once the chip temperature is read, it continues streaming that same value indefinitely. As a Python noob, I cannot figure out how to take a fresh temperature reading (on a user-set interval) and continuously stream the updated value. My code is below:
#!/usr/bin/env python
import plotly.plotly as py          # plotly library
from plotly.graph_objs import *     # all plotly graph objects
import json                         # used to parse config.json
import time                         # timer functions
import datetime
import os                           # used to acquire internal SoC temperature
import sys

# Initialize some variables with your credentials
with open('./config.json') as config_file:
    plotly_user_config = json.load(config_file)
username = plotly_user_config['plotly_username']
api_key = plotly_user_config['plotly_api_key']
stream_token = plotly_user_config['plotly_streaming_tokens'][0]

# Initialize a Plotly Object
py.sign_in(username, api_key)

# Initialize your graph (not streaming yet)
data = [Scatter(
    x=[], y=[],
    mode='lines+markers',
    stream={'token': stream_token, 'maxpoints': 1000},
    name='UCBPD')
]
layout = Layout(
    title='Raspberry Pi Temperature',
    xaxis={'autorange': True, 'title': 'Time of Day'},
    yaxis={'autorange': True, 'title': 'Degrees (Celsius)'}
)
your_graph_url = py.plot(Figure(data=data, layout=layout), filename='Raspberry Pi Temp', auto_open=False)

# Acquire internal SoC temperature
cmd = '/opt/vc/bin/vcgencmd measure_temp'
line = os.popen(cmd).readline().strip()
if "error" in line:
    print "Error ... is your firmware up-to-date? Run rpi-update"
else:
    # line now contains something like: temp=41.2'C
    # to get the temperature, split on =, and then on '
    temp = line.split('=')[1].split("'")[0]

# Initialize the Plotly Streaming Object
stream = py.Stream(stream_token)
stream.open()

# Start looping and streaming!
while True:
    stream.write({'x': datetime.datetime.now(), 'y': temp})
    time.sleep(1)  # delay between stream posts
Whatever code sets the value of the temp (temperature) variable needs to be inside the while loop; otherwise there is no way for that variable to ever change.
If you continue to have trouble, simplify things by removing the config file and the graphing, and just print the raw readings to the console.
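A minimal sketch of that change, reusing only the command and stream object from the script above (assumed to be set up exactly as shown), with the polling interval as a user-set variable:

# Sketch: read the temperature inside the loop so every post streams a fresh value.
interval = 5  # assumed user-set delay between readings, in seconds

while True:
    line = os.popen(cmd).readline().strip()   # e.g. temp=41.2'C
    if "error" in line:
        print "Error ... is your firmware up-to-date? Run rpi-update"
    else:
        temp = line.split('=')[1].split("'")[0]
        stream.write({'x': datetime.datetime.now(), 'y': temp})
    time.sleep(interval)  # delay between stream posts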
I am new to the trading_ig library.
I have connected to my live IG account.
from trading_ig import IGService
from trading_ig_config import config
ig_service = IGService(config.username, config.password, config.api_key, config.acc_type)
ig_service.create_session()
#account_info = ig_service.switch_account(config.acc_number, False) # not necessary
#print(account_info)
open_positions = ig_service.fetch_open_positions()
print("open_positions:\n%s" % open_positions)
print("")
I have been able to download the latest 100 days of an index:
epic = 'IX.D.SPTRD.DAILY.IP'
resolution = 'D'
num_points = 100
response = ig_service.fetch_historical_prices_by_epic_and_num_points(epic, resolution, num_points)
df_ask = response['prices']['ask']
print("ask prices:\n%s" % df_ask)
Now, based on an algorithm I have created, I want to open and close a position automatically by letting the Python code run continuously.
Do I need to use Lightstreamer?
How do I open and close a position in Python?
Again, I am new to this and the IG documentation didn't provide much information.
Thanks for your help.
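Not from the original question, but a rough sketch of one way to run an algorithm continuously without Lightstreamer: poll the REST API on an interval. It reuses only the fetch_historical_prices_by_epic_and_num_points call shown above; should_open_position is a hypothetical placeholder for your own logic, and the actual dealing calls are left out because their parameters depend on your account and instrument (see the IGService dealing methods in the trading_ig docs).

import time

def should_open_position(df_ask):
    # Hypothetical placeholder: return True when your algorithm wants to open a position.
    return False

epic = 'IX.D.SPTRD.DAILY.IP'
resolution = 'D'
num_points = 100

while True:
    response = ig_service.fetch_historical_prices_by_epic_and_num_points(epic, resolution, num_points)
    df_ask = response['prices']['ask']
    if should_open_position(df_ask):
        # Open or close the position here via the IGService dealing methods (see the library docs).
        pass
    time.sleep(60)  # assumed polling interval; Lightstreamer is only needed for pushed, tick-level updates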
Issue in Brief
I have recently started using an Azure server running Ubuntu 20.04. My workflow involves running around 50 Python scripts 24/7, and they are operationally very important to my team. The issue: when I first start those scripts, RAM usage is nominal, with about 12 of 16 GB still free after all of them are running.
But RAM usage by those scripts slowly increases to the point where the system starts killing them to free up main memory.
I have no idea what the issue is here. My scripts are pretty simple, and I really don't know where or how to resolve this. Can anyone show me some guidelines on how to approach solving this issue?
Comments
I am using Python 3.10. The script's job is to download data from a server and upload it to my MySQL database. I can provide the code if anyone asks for it.
Let me know if I can provide anything else to make this easier for you.
Code files
I am uploading the code which is taking up the maximum memory according to htop.
dcx_trades.py
import json
import time
import datetime
from mysql_connector import SQLConnector
import pandas as pd
import sys
import os
import signal
from contextlib import contextmanager

def raise_timeout(signum, frame):
    print("timeout")
    raise Exception("timeout")

@contextmanager
def timeout(time):
    # Register a function to raise a TimeoutError on the signal.
    signal.signal(signal.SIGALRM, raise_timeout)
    # Schedule the signal to be sent after ``time``.
    signal.alarm(time)
    try:
        yield
    except TimeoutError:
        # exit()
        pass
    finally:
        # Unregister the signal so it won't be triggered
        # if the timeout is not reached.
        signal.signal(signal.SIGALRM, signal.SIG_IGN)

from_db = {'user': 'db_user_name', 'password': 'password', 'host': 'host_url', 'database': 'crypto'}
s = SQLConnector('crypto', from_db)
dict_ = {'timestamp': '', "exchange": "coindcx", "symbol": "", 'error_msg': ''}
df = pd.DataFrame(columns=["exchange_id", "timestamp", "symbol", "price", "quantity", "exchange", "turnover"])
df.set_index('symbol')

while True:
    try:
        data = pd.read_csv('dcx_trades.csv')
        trades = data.to_dict(orient='records')
        data = data.iloc[0:0]
        if len(trades):
            for trade in trades:
                utc_time = datetime.datetime.fromtimestamp(trade['T']/1000, datetime.timezone.utc)
                local_time = utc_time.astimezone()
                datetime_formatted = local_time.strftime("%Y-%m-%d %H:%M:%S")
                dict_['timestamp'] = datetime_formatted
                dict_["exchange_id"] = 12345
                dict_["symbol"] = trade['s']
                dict_['price'] = trade['p']
                dict_['quantity'] = trade['q']
                dict_['turnover'] = float(trade['p'])*float(trade['q'])
                dict_['error'] = '0'
                df = df.append(dict_, ignore_index=True)
                print(df)
            df_new = df
            df_new = df_new.to_dict(orient='records')
            df = df.iloc[0:0]
            data.to_csv('dcx_trades.csv', mode='w', index=False)
            if len(df_new):
                with timeout(60):
                    try:
                        print(datetime.datetime.now())
                        s.add_multipletrades(df_new)
                        print(datetime.datetime.now())
                    except Exception as e:
                        print(e)
                        os.execv(sys.executable, ['python'] + sys.argv)
                        print("error_time:", datetime.datetime.now())
    except Exception as e:
        data = pd.read_csv('dcx_trades.csv')
        data = data.loc[1:]
        data.to_csv('dcx_trades.csv', index=False)
        pass
Objective of the file:
Firstly, s = SQLConnector('crypto', from_db) makes the connection to the DB. All the database-related functions are defined in another file named mysql_connector.py, which is imported at the top.
The code then reads from the CSV file named dcx_trades.csv and preprocesses the data to match the database table. Before uploading the data to the DB, it clears the CSV file so as to remove duplicates. The timeout(60) context manager is used because the write to the DB sometimes gets stuck, in which case the script needs to be restarted, which is what the timeout() handling does.
All of those transforms can easily be done in SQL:
LOAD DATA into a temp table with whatever columns match the columns and datatypes in the file.
Run a single INSERT ... SELECT to copy the values over, applying whatever expressions are needed (such as p * q).
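A hedged sketch of that approach driven from Python, assuming a MySQL server with LOCAL INFILE enabled; the table and column names are made up to match the fields used in the script above (T, s, p, q) and are not from the original post.

import mysql.connector

# Assumed credentials; allow_local_infile is needed for LOAD DATA LOCAL INFILE.
conn = mysql.connector.connect(user='db_user_name', password='password',
                               host='host_url', database='crypto',
                               allow_local_infile=True)
cur = conn.cursor()

# 1. Bulk-load the raw CSV into a temp table whose columns match the file.
cur.execute("CREATE TEMPORARY TABLE trades_raw (T BIGINT, s VARCHAR(32), p DOUBLE, q DOUBLE)")
cur.execute("LOAD DATA LOCAL INFILE 'dcx_trades.csv' INTO TABLE trades_raw "
            "FIELDS TERMINATED BY ',' IGNORE 1 LINES")

# 2. Copy into the destination table in one statement, computing derived values
#    (timestamp conversion and turnover = p * q) in SQL instead of pandas.
cur.execute("INSERT INTO trades (exchange_id, timestamp, symbol, price, quantity, exchange, turnover) "
            "SELECT 12345, FROM_UNIXTIME(T / 1000), s, p, q, 'coindcx', p * q FROM trades_raw")
conn.commit()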
I have a Python application that writes some data to an AWS ElastiCache cluster at a regular interval.
Here's an example script that simulates the functionality of my application. It also replicates the error that I have been facing.
from datetime import datetime
import redis
import time
import sys
import random

class Config:
    default_host = "localhost"
    master_host = "xxx.xx.0001.xxxx.cache.amazonaws.com"
    replica_host = "xxx.xx.0001.xxxx.cache.amazonaws.com"
    redis_db = 8
    socket_conn_timeout = 10
    request_delay_sec = 0.1

def get_redis_client():
    return redis.Redis(
        host=Config.master_host,
        db=Config.redis_db,
        socket_connect_timeout=Config.socket_conn_timeout,
    )

def get_random_key_value():
    val = time.time()
    key = "test_key_" + str(random.randint(0, 100))
    return key, val

r = get_redis_client()
r.flushdb()

flag = False
while True:
    try:
        if flag:
            print("beat:", time.time())
        r.set(*get_random_key_value())
        time.sleep(Config.request_delay_sec)
    except redis.RedisError as re:
        print(datetime.now(), "Error:", type(re), re)
        flag = True
        # sys.exit()
    except KeyboardInterrupt:
        print("Stopping loop execution")
        sys.exit()
Here are the environment details of my application:
Python (v 3.7.0)
redis-py (v 3.5.3)
AWS ElastiCache (cluster mode disabled, 1 master node, 1 read replica)
When I scale my AWS ElastiCache cluster vertically while the above script is running, I get the following error for a few seconds while the scale-up is in progress, and then it goes away.
<class 'redis.exceptions.ReadOnlyError'> You can't write against a read only replica.
The AWS docs also state that some inconsistencies may occur during the vertical scaling process because of data syncing (https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/redis-cluster-vertical-scaling.html).
Has anyone faced a similar issue, or can anyone explain why this error occurs during the scale-up process? How can it be fixed?
EDIT:
I tried the same thing with a golang script and it works perfectly fine.
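Not part of the original post, but one common mitigation in Python is to treat the ReadOnlyError as transient: reconnect and retry the write until the endpoint points back at the primary. A rough sketch under that assumption, reusing get_redis_client from the script above:

import time
import redis

def set_with_retry(key, value, retries=5, delay=1.0):
    # Retries writes that fail while the endpoint temporarily resolves to a replica.
    client = get_redis_client()
    for attempt in range(retries):
        try:
            return client.set(key, value)
        except (redis.exceptions.ReadOnlyError, redis.exceptions.ConnectionError):
            time.sleep(delay)
            client = get_redis_client()  # re-resolve the endpoint and try again
    raise redis.exceptions.RedisError("write still failing after %d retries" % retries)

# e.g. inside the loop above:
# set_with_retry(*get_random_key_value())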
I have a Python script which collects data (solar, battery, load, current time) from my solar controller via Modbus and writes the data to my Oracle DB. The current time comes from my system time. At the moment the code runs 24/7 (at a one-minute interval); if the connection is lost, the code tries 5 times to reconnect and then stops executing.
My goal: at the moment I get blank data because the script stops when the connection drops. Is there a way I can keep the script running (even when there is no connection), insert values like 0 for the solar and battery readings, and keep passing the current time to the database so I can plot graphs over 24 hours? And is there a way for my script to reconnect automatically as well? Thanks a lot for your precious time!
Or is there a way to keep the script running even though there is no Modbus connection? Then I can write 0 to the battery and solar values when there is no connection.
My code
from pymodbus.constants import Defaults
from pymodbus.constants import Endian
from pymodbus.client.sync import ModbusTcpClient as ModbusClient
from pymodbus.payload import BinaryPayloadDecoder
from datetime import datetime
from datetime import date
import schedule
import time
import cx_Oracle

con = cx_Oracle.connect("user/pas")  # connects to database
cursor = con.cursor()

Defaults.Timeout = 25
Defaults.Retries = 5

def loaddata():
    today = date.today()
    now = datetime.now()
    client = ModbusClient('10.8.0.14', port='83')

    result = client.read_input_registers(840, 2)  # Read battery voltage
    trailerone = 1
    decoder = BinaryPayloadDecoder.fromRegisters(result.registers, byteorder=Endian.Big)
    voltage = decoder.decode_16bit_uint()
    ####################### Battery voltage ####################################
    battery1 = str("{0:.1f}".format(voltage/10.0))  # battery1 has current battery voltage
    #############################################################################

    result2 = client.read_input_registers(776, 3, unit=226)  # Read Solar data
    decoder2 = BinaryPayloadDecoder.fromRegisters(result2.registers, byteorder=Endian.Big)
    voltage2 = decoder2.decode_16bit_uint()
    ####################### Solar voltage ######################################
    solar1 = str("{0:.1f}".format(voltage2/100.0))  # has solar voltage
    #############################################################################

    result3 = client.read_input_registers(860, 1)  # Load Watts data
    decoder3 = BinaryPayloadDecoder.fromRegisters(result3.registers, byteorder=Endian.Big)
    voltage3 = decoder3.decode_16bit_uint()
    ######################### LOAD, DATE, TIME #################################
    load1 = str("{0:.1f} W".format(voltage3))  # has load wattage
    date1 = today.strftime("%d/%m/%Y")  # has today's date
    current_time = now.strftime("%H:%M:%S")  # has current time
    #############################################################################

    ##################### Inserting values into database table #################
    cursor.execute('INSERT INTO Mytable(TRAILERID,PANELVOLTAGE,BATTERYVOLTAGE,DATADATE,DATATTIME) VALUES (:trailerone, :solar1, :battery1, :date1, :current_time)', [trailerone, solar1, battery1, date1, current_time])
    con.commit()
    f.close()

schedule.every(60).seconds.do(loaddata)  # runs loaddata every minute, endlessly

while 1:
    schedule.run_pending()
    time.sleep(1)
Instead of using the schedule library, you could use a cron job on a Linux system.
Add try...except around loaddata; when an exception happens, save a default of 0 for the solar and battery values, as in the sketch below.
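A hedged sketch of that suggestion, rearranging the loaddata function from the question so a failed Modbus poll still inserts a row with zeroed solar and battery values and the current time; the exact defaults and the decision to simply retry on the next scheduled run are assumptions, not tested against the controller.

def loaddata():
    today = date.today()
    now = datetime.now()
    trailerone = 1
    battery1, solar1 = "0", "0"  # defaults written when the poll fails
    try:
        client = ModbusClient('10.8.0.14', port='83')
        result = client.read_input_registers(840, 2)   # battery voltage
        decoder = BinaryPayloadDecoder.fromRegisters(result.registers, byteorder=Endian.Big)
        battery1 = "{0:.1f}".format(decoder.decode_16bit_uint() / 10.0)
        result2 = client.read_input_registers(776, 3, unit=226)  # solar voltage
        decoder2 = BinaryPayloadDecoder.fromRegisters(result2.registers, byteorder=Endian.Big)
        solar1 = "{0:.1f}".format(decoder2.decode_16bit_uint() / 100.0)
        client.close()
    except Exception as e:
        print("Modbus read failed, inserting defaults:", e)  # next scheduled run retries the connection
    date1 = today.strftime("%d/%m/%Y")
    current_time = now.strftime("%H:%M:%S")
    cursor.execute('INSERT INTO Mytable(TRAILERID,PANELVOLTAGE,BATTERYVOLTAGE,DATADATE,DATATTIME) '
                   'VALUES (:trailerone, :solar1, :battery1, :date1, :current_time)',
                   [trailerone, solar1, battery1, date1, current_time])
    con.commit()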
To anyone wondering how to do it: I used the try/except method to implement what I was looking for.
So this is my very first attempt at Python and at programming the Raspberry Pi. My small project is to light an LED when I get a mention on Twitter. All very simple, and the code shown below works well. My question relates to storing the previously seen mentions in a text file instead of a variable. Essentially, the code checks the printed_ids variable for the list of tweet.ids that have already been seen, to prevent the LEDs from flashing every time the program is re-run. My plan is to run the Python code as a scheduled job, but I don't want every restart of the Pi to force the program to go through all my mentions again and write each occurrence to the printed_ids variable.
So my thought was to write them to a text file instead, so that the list survives a reboot.
Any thoughts/recommendations?
Thanks for your help.
import sys
import tweepy
import RPi.GPIO as GPIO  ## Import GPIO library
import time              ## Import 'time' library. Allows use of 'sleep'

GPIO.setmode(GPIO.BOARD)  ## Use board pin numbering

CONSUMER_KEY = '******************'
CONSUMER_SECRET = '*****************'
ACCESS_KEY = '**********************'
ACCESS_SECRET = '*********************'
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)

speed = 2
printed_ids = []

while True:
    for tweet in api.mentions_timeline():
        if tweet.id not in printed_ids:
            print "#%s: %s" % (tweet.author.screen_name, tweet.text)
            GPIO.setup(7, GPIO.OUT)  ## Setup GPIO Pin 7 to OUT
            GPIO.output(7, True)     ## Switch on pin 7
            time.sleep(speed)        ## Wait
            GPIO.output(7, False)    ## Switch off pin 7
            f.open('out', 'w')
            f.write(tweet.id)
            ##printed_ids.append(tweet.id)
            GPIO.cleanup()
    time.sleep(60)  # Wait for 60 seconds.
What you're looking for is called "serialization", and Python provides many options for that. Perhaps the simplest and most portable one is the json module:
import json

# read:
with open('ids.json', 'r') as fp:
    printed_ids = json.load(fp)
# #TODO: handle errors if the file doesn't exist or is empty

# write:
with open('ids.json', 'w') as fp:
    json.dump(printed_ids, fp)
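For completeness, a hedged sketch of how those two snippets could slot into the loop from the question: load the seen IDs once at startup and dump them back out after each new mention. The file name and the error handling for a missing or empty file are assumptions.

import json

try:
    with open('ids.json', 'r') as fp:
        printed_ids = json.load(fp)
except (IOError, ValueError):  # first run: file missing or empty
    printed_ids = []

for tweet in api.mentions_timeline():
    if tweet.id not in printed_ids:
        # ... flash the LED as in the loop above ...
        printed_ids.append(tweet.id)
        with open('ids.json', 'w') as fp:
            json.dump(printed_ids, fp)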