RuntimeError: input(): lost sys.stdin from executable file - python

I'm seeking your help regarding an executable file I converted from my .py file using auto-py-to-exe. The code below is for my CSV automation report. It works fine when run from the IDE and from CMD, but when I convert it to an .exe, this is what happens:
Traceback (most recent call last):
  File "new.py", line 7, in <module>
    input_file = input("Enter the file name of your HC file: ")
RuntimeError: input(): lost sys.stdin
Here is my code for your reference. I hope you can help me with this issue.
import pandas as pd
import numpy as np
print("Fixed Network Health Check Checker")
input_file = input("Enter the file name of your HC file: ")
file = input_file + str('.xlsx')
df = pd.read_excel(file, sheet_name = 'MSAN Cabinets')
print("Done")
#fixed
df['MSAN Interface'] = df['MSAN Interface'].replace(np.nan, 0)
df['ACCESS Interface 1 (IPRAN, ATN, LSA, VLAN)'] = df['ACCESS Interface 1 (IPRAN, ATN, LSA, VLAN)'].replace(np.nan, 0)
df['Homing AG2'] = df['Homing AG1'].replace(np.nan, 0)
df = df.iloc[:11255]
# filter "REGION" and drop unnecessary columns
f_df1 = df[df['REGION'] == 'MIN']
dropcols_df1 = f_df1.drop(df.iloc[:, 1:6], axis = 1)
dropcols_df2 = dropcols_df1.drop(df.iloc[:, 22:27], axis = 1)
dropcols_df3 = dropcols_df2.drop(df.iloc[:, 37:50], axis = 1)
# filter "MSAN Interface" and filter the peak util for >= 50%
f_d2 = dropcols_df3['MSAN Interface'] != 0
msan_int = dropcols_df3[f_d2]
f_msan_int = msan_int['Peak Util'] >= 0.5
new_df = msan_int[f_msan_int]
# filter "ACCESS Interface 1 (IPRAN, ATN, LSA, VLAN)" and filter the peak util for >= 50%
fblank_msan_int = dropcols_df3['MSAN Interface'] == 0
msan_int1 = dropcols_df3[fblank_msan_int]
f_df3 = dropcols_df3['ACCESS Interface 1 (IPRAN, ATN, LSA, VLAN)'] != 0
access_int1 = dropcols_df3[f_df3]
f_access_int1 = access_int1['Peak Util.1'] >= 0.5
new_df1 = access_int1[f_access_int1]
# filter "Homing AG1" and filter the peak util for >= 50%
fblank_msan_int1 = dropcols_df3['MSAN Interface'] == 0
msan_int2 = dropcols_df3[fblank_msan_int1]
f_access_int2 = msan_int2['ACCESS Interface 1 (IPRAN, ATN, LSA, VLAN)'] == 0
new_df2 = msan_int2[f_access_int2]
ag1 = new_df2['Peak Util.3'] >= 0.5
new_df3 = new_df2[ag1]
# Concatenate all DataFrames
pdList = [new_df, new_df1, new_df3]
final_df = pd.concat(pdList)
print(final_df.to_csv('output.csv', index = False))
Thank you. By the way, I'm new to Python :).
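For what it's worth, "lost sys.stdin" usually means the executable was built without a console attached (the "Window Based" / --noconsole option), so input() has nothing to read from; rebuilding it as a console-based app is the usual fix. Alternatively, here is a minimal sketch (my addition, not the original code) that avoids relying on stdin by accepting the file name as a command-line argument and only prompting when a console is actually present:

import argparse
import sys

import pandas as pd

# Hypothetical argument handling: lets the .exe be started as "new.exe <HC file name>"
parser = argparse.ArgumentParser(description="Fixed Network Health Check Checker")
parser.add_argument("hc_file", nargs="?", help="file name of your HC file (without .xlsx)")
args = parser.parse_args()

if args.hc_file:
    input_file = args.hc_file
elif sys.stdin is not None and sys.stdin.isatty():
    # Only prompt when a real console is attached to the process
    input_file = input("Enter the file name of your HC file: ")
else:
    sys.exit("No console/stdin available - pass the HC file name as an argument instead.")

df = pd.read_excel(input_file + ".xlsx", sheet_name="MSAN Cabinets")
print("Done")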

Related

I am getting 'index out of bound error' when reading from csv in pandas but not when I extract the data via api. What could be the reason?

For my bot, I first extract data via an API and store it in a CSV. When I run my for loop on the data coming from the API, it gives no error and runs smoothly.
But when the CSV file is read back and the same loop is run, it gives an out-of-bounds error.
This is my function to generate data:
full_list = pd.DataFrame(columns=("date","open","high","low","close","volume","ticker","RSI","ADX","20_sma","max_100"))

def stock_data(ticker):
    create_data = fetchOHLC(ticker,'minute',60)
    create_data["ticker"] = ticker
    create_data["RSI"] = round(rsi(create_data,25),2)
    create_data["ADX"] = round(adx(create_data,14),2)
    create_data["20_sma"] = round(create_data.close.rolling(10).mean().shift(),2)
    create_data["max_100"] = create_data.close.rolling(100).max().shift()
    create_data.dropna(inplace=True,axis=0)
    create_data.reset_index(inplace=True)
    return create_data

stocklist = open("stocklist.txt","r+")
tickers = stocklist.readlines()

for x in tickers:
    try:
        full_list = full_list.append(stock_data(x.strip()))
    except:
        print(f'{x.strip()} did not work')

full_list.to_csv("All_Data")
full_list
When I run the code below on the DataFrame created above, I get no error. But when I run the same code on the CSV file, I get an out-of-bounds error.
list_tickers = full_list["ticker"].unique()

for y in list_tickers[:2]:
    main = full_list[full_list["ticker"]==y]
    pos = 0
    num = 0
    tick = y
    signal_time = 0
    signal_rsi = 0
    signal_adx = 0
    buy_time = 0
    buy_price = 0
    sl = 0
    #to add trailing sl in this.
    for x in main.index:
        maxx = main.iloc[x]["max_100"]
        rsi = main.iloc[x]["RSI"]
        adx = main.iloc[x]["ADX"]
        sma = main.iloc[x]["20_sma"]
        close = main.iloc[x]["close"]
        high = main.iloc[x]["high"]
        if rsi > 80 and adx > 35 and close > maxx:
            if pos == 0:
                buy_price = main.iloc[x+1]["open"]
                buy_time = main.iloc[x+1]["date"]
                pos=1
                signal_time = main.iloc[x]["date"]
                signal_rsi = main.iloc[x]["RSI"]
                signal_adx = main.iloc[x]["ADX"]
        elif close < sma:
            if pos == 1:
                sell_time = main.iloc[x]["date"]
                sell_price = sma*.998
                pos=0
                positions.loc[positions.shape[0]] = [y,signal_time,signal_rsi,signal_adx,buy_time,buy_price,sell_time,sell_price]
Any idea why?
Here is the cleanup and file-loading code:
full_list = pd.read_csv("All_data")
full_list.dropna(inplace=True,axis=0)
full_list.drop(labels="Unnamed: 0",axis=1)  # "Unnamed: 0" is the index of the previous dataframe
full_list.head(5)
Thanks
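A likely cause, going by the posted code rather than anything confirmed in the thread: the inner loop does main.iloc[x] with x taken from main.index, i.e. it feeds index labels to a positional lookup. When the data comes fresh from the API, each ticker's frame has just been reset_index()-ed, so labels and positions happen to coincide. After read_csv plus dropna, the labels of the filtered main are no longer 0..len(main)-1, so .iloc[x] (and especially .iloc[x+1]) can run past the end of the frame. A minimal sketch of that fix, reusing the "All_Data" file written above:

import pandas as pd

full_list = pd.read_csv("All_Data")          # file written by the script above
full_list.dropna(inplace=True, axis=0)

for y in full_list["ticker"].unique()[:2]:
    # reset_index(drop=True) makes the row labels 0..len(main)-1 again,
    # which is what the positional .iloc lookups below expect.
    main = full_list[full_list["ticker"] == y].reset_index(drop=True)
    for x in range(len(main) - 1):            # iterate positions, not index labels
        maxx = main.iloc[x]["max_100"]
        close = main.iloc[x]["close"]
        next_open = main.iloc[x + 1]["open"]  # x + 1 now always stays inside the frame
        # ... rest of the signal logic from the question goes here unchanged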

Python Chess Data (FEN) into Stockfish for Python

I am trying to use Stockfish to evaluate a chess position given in FEN notation, all in Python. I am mainly using two libraries (pgnToFen, which I found on GitHub here: https://github.com/SindreSvendby/pgnToFen, and the MIT-licensed Stockfish wrapper here: https://github.com/zhelyabuzhsky/stockfish). After many bugs I have reached problem after problem. Stockfish not only can't analyse this FEN position (3b2k1/1p3pp1/8/3pP1P1/pP3P2/P2pB3/6K1/8 b f3 -), it loops forever! "No worries!", I thought, changing the source code should be doable. I changed _put(), but I can't put dummy values in because stdin.flush() won't execute once I give it those values, which means I don't think I can even skip to the next row in my dataframe. :( The code I changed is below.
def _put(self, command: str, tmp_time) -> None:
    if not self.stockfish.stdin:
        raise BrokenPipeError()
    self.stockfish.stdin.write(f"{command}\n")
    try:
        self.stockfish.stdin.flush()
    except:
        if command != "quit":
            self.stockfish.stdin.write('isready\n')
            try:
                time.sleep(tmp_time)
                self.stockfish.stdin.flush()
            except:
                #print ('Imma head out', file=sys.stderr)
                raise ValueError('Imma head out...')
                #sys.stderr.close()
def get_evaluation(self) -> dict:
    """Evaluates current position
    Returns:
        A dictionary of the current advantage with "type" as "cp" (centipawns) or "mate" (checkmate in)
    """
    evaluation = dict()
    fen_position = self.get_fen_position()
    if "w" in fen_position:  # w can only be in FEN if it is whites move
        compare = 1
    else:  # stockfish shows advantage relative to current player, convention is to do white positive
        compare = -1
    self._put(f"position {fen_position}", 5)
    self._go()
    x = 0
    while True:
        x = x + 1
        text = self._read_line()
        #print(text)
        splitted_text = text.split(" ")
        if splitted_text[0] == "info":
            for n in range(len(splitted_text)):
                if splitted_text[n] == "score":
                    evaluation = {
                        "type": splitted_text[n + 1],
                        "value": int(splitted_text[n + 2]) * compare,
                    }
        elif splitted_text[0] == "bestmove":
            return evaluation
        elif x == 500:
            evaluation = {
                "type": 'cp',
                "value": 10000,
            }
            return evaluation
And, last but not least, the change to the __init__ constructor below:
self._stockfish_major_version: float = float(self._read_line().split(" ")[1])
And the code into which I am importing all of this is below; this is where the errors pop up.
import pandas as pd
import re
import nltk
import numpy as np
from stockfish import Stockfish
import os
import sys

sys.path.insert(0, r'C:\Users\path\to\pgntofen')
import pgntofen

#nltk.download('punkt')
#Changed models.py for major version line 39 in stockfish from int to float
stockfish = Stockfish(r"C:\Users\path\to\Stockfish.exe")
file = r'C:\Users\path\to\selenium-pandas output.csv'
chunksize = 10 ** 6

for chunk in pd.read_csv(file, chunksize=chunksize):
    for index, row in chunk.iterrows():
        FullMovesStr = str(row['FullMoves'])
        FullMovesStr = FullMovesStr.replace('+', '')
        if "e.p" in FullMovesStr:
            row.to_csv(r'C:\Users\MyName\Logger.csv', header=None, index=False, mode='a')
            print('Enpassant')
            continue
        tokens = nltk.word_tokenize(FullMovesStr)
        movelist = []
        for tokenit in range(len(tokens)):
            if "." in str(tokens[tokenit]):
                try:
                    tokenstripped = re.sub(r"[0-9]+\.", "", tokens[tokenit])
                    token = [tokenstripped, tokens[tokenit+1]]
                    movelist.append(token)
                except:
                    continue
            else:
                continue
        DFMoves = pd.DataFrame(movelist, columns=[['WhiteMove', 'BlackMove']])
        DFMoves['index'] = row['index']
        DFMoves['Date'] = row['Date']
        DFMoves['White'] = row['White']
        DFMoves['Black'] = row['Black']
        DFMoves['W ELO'] = row['W ELO']
        DFMoves['B ELO'] = row['B ELO']
        DFMoves['Av ELO'] = row['Av ELO']
        DFMoves['Event'] = row['Event']
        DFMoves['Site'] = row['Site']
        DFMoves['ECO'] = row['ECO']
        DFMoves['Opening'] = row['Opening']
        pd.set_option('display.max_rows', DFMoves.shape[0]+1)
        print(DFMoves[['WhiteMove', 'BlackMove']])
        seqmoves = []
        #seqmovesBlack = []
        evalmove = []
        pgnConverter = pgntofen.PgnToFen()
        #stockfish.set_fen_position("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1")
        #rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1
        for index, row in DFMoves.iterrows():
            try:
                stockfish.set_fen_position("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1")
            except:
                evalmove.append("?")
                continue
            #stockfish.set_fen_position("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1")
            pgnConverter.resetBoard()
            WhiteMove = str(row['WhiteMove'])
            BlackMove = str(row['BlackMove'])
            if index == 0:
                PGNMoves1 = [WhiteMove]
                seqmoves.append(WhiteMove)
                #seqmoves.append(BlackMove)
            else:
                seqmoves.append(WhiteMove)
                #seqmoves.append(BlackMove)
                PGNMoves1 = seqmoves.copy()
            #print(seqmoves)
            try:
                pgnConverter.pgnToFen(PGNMoves1)
                fen = pgnConverter.getFullFen()
            except:
                break
            try:
                stockfish.set_fen_position(fen)
                print(stockfish.get_board_visual())
                evalpos = stockfish.get_evaluation()
                evalmove.append(evalpos)
            except:
                pass
            try:
                stockfish.set_fen_position("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1")
            except:
                evalmove.append("?")
                continue
            pgnConverter.resetBoard()
            if index == 0:
                PGNMoves2 = [WhiteMove, BlackMove]
                seqmoves.append(BlackMove)
            else:
                seqmoves.append(BlackMove)
                PGNMoves2 = seqmoves.copy()
            try:
                pgnConverter.pgnToFen(PGNMoves2)
                fen = pgnConverter.getFullFen()
            except:
                break
            try:
                stockfish.set_fen_position(fen)
                print(stockfish.get_board_visual())
                evalpos = stockfish.get_evaluation()
                print(evalpos)
                evalmove.append(evalpos)
            except:
                pass
        #DFMoves['EvalWhite'] = evalwhite
        #DFMoves['EvalBlack'] = evalblack
        print(evalmove)
So the detailed question is: how do I get stockfish.get_evaluation() to just skip, or better yet work correctly, for this FEN position (3b2k1/1p3pp1/8/3pP1P1/pP3P2/P2pB3/6K1/8 b f3 -)? I have been working on this problem for quite a while, so any insight would be very much appreciated.
My specs: Windows 10, Python 3.9, Intel(R) Core(TM) i9-10980XE CPU @ 3.00 GHz, and 64.0 GB of RAM.
Thanks :)
OK. It seems your FEN is invalid (3b2k1/1p3pp1/8/3pP1P1/pP3P2/P2pB3/6K1/8 b f3 -), so check that. Also, the python-chess library (https://python-chess.readthedocs.io/en/latest/index.html) lets you work with FEN and chess engines. Pretty cool, no? Here is an example using these two fantastic tools:
import chess
import chess.engine
import chess.pgn

pgn = open("your_pgn_file.pgn")
game = chess.pgn.read_game(pgn)
engine = chess.engine.SimpleEngine.popen_uci("your_stockfish_path.exe")

# Iterate through all moves, play them on a board and analyse them.
board = game.board()
for move in game.mainline_moves():
    board.push(move)
    print(engine.analyse(board, chess.engine.Limit(time=0.1))["score"])
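To catch malformed FEN strings like the one in the question before they ever reach the engine, here is a small sketch (my addition, assuming python-chess is installed): chess.Board() raises ValueError for strings it cannot parse, and is_valid() flags illegal positions.

import chess

def is_valid_fen(fen: str) -> bool:
    """Return True if python-chess can parse the FEN and the resulting position is legal."""
    try:
        board = chess.Board(fen)
    except ValueError:
        return False
    return board.is_valid()

# The FEN from the question is rejected: "f3" sits where the castling-rights field
# should be (the en passant square slipped into the wrong slot) and the move counters are missing.
print(is_valid_fen("3b2k1/1p3pp1/8/3pP1P1/pP3P2/P2pB3/6K1/8 b f3 -"))                  # False
print(is_valid_fen("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"))          # True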

How can I create an input for choosing different files to access?

I am quite new to Python, so please bear with me.
Currently, this is my code:
import pandas as pd
import statistics
import matplotlib.pyplot as plt
import math
from datetime import datetime

start_time = datetime.now()

gf = pd.read_csv(r"/Users/aaronhuang/Documents/Desktop/ffp/exfileCLEAN2.csv",
                 skiprows=[1])
bf = pd.read_csv(r"/Users/aaronhuang/Documents/Desktop/ffp/2SeconddatasetCLEAN.csv",
                 skiprows=[1])
df = (input("Which data set? "))

magnitudes = (df['Magnitude '].values)
times = df['Time '].values
average = statistics.mean(magnitudes)
sd = statistics.stdev(magnitudes)
below = sd * 3

class data_set:
    def __init__(self, index):
        self.mags = []
        self.i = index
        self.mid_time = df['Time '][index]
        self.mid_mag = df['Magnitude '][index]
        self.times = []
        ran = 80
        for ii in range(ran):
            self.times.append(df['Time '][self.i + ii - ran / 2])
            self.mags.append(df['Magnitude '][self.i + ii - ran / 2])

data = []
today = float(input("What is the range? "))
i = 0
while (i < len(df['Magnitude '])):
    if (abs(df['Magnitude '][i]) <= (average - below)):
        # check if neighbours
        t = df['Time '][i]
        tt = True
        for d in range(len(data)):
            if abs(t - data[d].mid_time) <= today:
                # check if closer to center
                if df['Magnitude '][i] < data[d].mid_mag:
                    data[d] = data_set(i)
                    print("here")
                tt = False
                break
        if tt:
            data.append(data_set(i))
    i += 1
print("found values")

# graphing
height = 2  # Change this for number of columns
width = math.ceil(len(data) / height)
if width < 2:
    width = 2
fig, axes = plt.subplots(width, height, figsize=(30, 30))
row = 0
col = 0
for i in range(len(data)):
    axes[row][col].plot(data[i].times, data[i].mags)
    col += 1
    if col > height - 1:
        col = 0
        row += 1
plt.show()

end_time = datetime.now()
print('Duration: {}'.format(end_time - start_time))
Currently, the error produced is this:
/Users/aaronhuang/.conda/envs/EXTTEst/bin/python "/Users/aaronhuang/PycharmProjects/EXTTEst/Code sandbox.py"
Which data set? gf
Traceback (most recent call last):
  File "/Users/aaronhuang/PycharmProjects/EXTTEst/Code sandbox.py", line 14, in <module>
    magnitudes = int(df['Magnitude '].values)
TypeError: string indices must be integers
Process finished with exit code 1
I am trying to let the user choose which file to access, so that the rest of the code runs on that data set.
For example, if the user types gf, I would like the code to use the first data file.
Any help would be appreciated. Thank you
Why not use an if-statement at the beginning? Try this:
instead of:
gf = pd.read_csv(r"/Users/aaronhuang/Documents/Desktop/ffp/exfileCLEAN2.csv",
                 skiprows=[1])
bf = pd.read_csv(r"/Users/aaronhuang/Documents/Desktop/ffp/2SeconddatasetCLEAN.csv",
                 skiprows=[1])
df = (input("Which data set? "))
Use this:
choice = input("Which data set? ")
if choice == "gf":
    df = pd.read_csv(r"/Users/aaronhuang/Documents/Desktop/ffp/exfileCLEAN2.csv",
                     skiprows=[1])
elif choice == "bf":
    df = pd.read_csv(r"/Users/aaronhuang/Documents/Desktop/ffp/2SeconddatasetCLEAN.csv",
                     skiprows=[1])
else:
    print("Error. Your choice is not valid")
    df = ""
    raise SystemExit  # `break` only works inside a loop; stop the script instead
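A slightly tidier variant of the same idea (my addition, reusing the paths from the question) keeps the choices in a dict, so adding a new data set is just one more entry:

import pandas as pd

# Map the name the user types to the CSV it should load (paths from the question).
paths = {
    "gf": r"/Users/aaronhuang/Documents/Desktop/ffp/exfileCLEAN2.csv",
    "bf": r"/Users/aaronhuang/Documents/Desktop/ffp/2SeconddatasetCLEAN.csv",
}

choice = input("Which data set? ").strip()
if choice not in paths:
    raise SystemExit(f"Error. '{choice}' is not a valid choice ({', '.join(paths)})")

df = pd.read_csv(paths[choice], skiprows=[1])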

Python: invalid syntax: <string>, line 1, pos 16

I have developed a Python program that needs to take some arguments from the command line in order to run. But I keep getting the same error:
Traceback (most recent call last):
  File "<string>", line 1, in <fragment>
invalid syntax: <string>, line 1, pos 16
I haven't the faintest idea what is wrong with my code, so I present it below in case someone can help me:
import QSTK.qstkutil.qsdateutil as du
import QSTK.qstkutil.tsutil as tsu
import QSTK.qstkutil.DataAccess as da
import datetime as dt
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import time
import math
import copy
import QSTK.qstkstudy.EventProfiler as ep
import csv
import sys
import argparse
def readData(li_startDate, li_endDate, ls_symbols):
    #Create datetime objects for Start and End dates (STL)
    dt_start = dt.datetime(li_startDate[0], li_startDate[1], li_startDate[2])
    dt_end = dt.datetime(li_endDate[0], li_endDate[1], li_endDate[2])

    #Initialize daily timestamp: closing prices, so timestamp should be hours=16 (STL)
    dt_timeofday = dt.timedelta(hours=16)

    #Get a list of trading days between the start and end dates (QSTK)
    ldt_timestamps = du.getNYSEdays(dt_start, dt_end, dt_timeofday)

    #Create an object of the QSTK-dataaccess class with Yahoo as the source (QSTK)
    c_dataobj = da.DataAccess('Yahoo', cachestalltime=0)

    #Keys to be read from the data
    ls_keys = ['open', 'high', 'low', 'close', 'volume', 'actual_close']

    #Read the data and map it to ls_keys via dict() (i.e. Hash Table structure)
    ldf_data = c_dataobj.get_data(ldt_timestamps, ls_symbols, ls_keys)
    d_data = dict(zip(ls_keys, ldf_data))

    return [d_data, dt_start, dt_end, dt_timeofday, ldt_timestamps]
def marketsim(cash,orders_file,values_file):
    orders = pd.read_csv(orders_file,index_col='Date',parse_dates=True,header=None)
    ls_symbols = list(set(orders['X.4'].values))
    df_lastrow = len(orders) - 1
    dt_start = dt.datetime(orders.get_value(0, 'X.1'),orders.get_value(0, 'X.2'),orders.get_value(0, 'X.3'))
    dt_end = dt.datetime(orders.get_value(df_lastrow, 'X.1'),orders.get_value(df_lastrow, 'X.2'),orders.get_value(df_lastrow, 'X.3') + 1 )

    #d_data = readData(dt_start,dt_end,ls_symbols)
    #Initialize daily timestamp: closing prices, so timestamp should be hours=16 (STL)
    dt_timeofday = dt.timedelta(hours=16)
    #Get a list of trading days between the start and end dates (QSTK)
    ldt_timestamps = du.getNYSEdays(dt_start, dt_end, dt_timeofday)
    #Create an object of the QSTK-dataaccess class with Yahoo as the source (QSTK)
    c_dataobj = da.DataAccess('Yahoo', cachestalltime=0)
    #Keys to be read from the data
    ls_keys = ['open', 'high', 'low', 'close', 'volume', 'actual_close']
    #Read the data and map it to ls_keys via dict() (i.e. Hash Table structure)
    df_data = c_dataobj.get_data(ldt_timestamps, ls_symbols, ls_keys)
    d_data = dict(zip(ls_keys, ldf_data))

    ls_symbols.append("_CASH")
    trades = pd.Dataframe(index=list(ldt_timestamps[0]),columns=list(ls_symbols))
    current_cash = cash
    trades["_CASH"][ldt_timestamps[0]] = current_cash
    current_stocks = dict()
    for symb in ls_symbols:
        current_stocks[symb] = 0
        trades[symb][ldt_timestamps[0]] = 0

    for row in orders.iterrows():
        row_data = row[1]
        current_date = dt.datetime(row_data['X.1'],row_data['X.2'],row_data['X.3'],16)
        symb = row_data['X.4']
        stock_value = d_data['close'][symb][current_date]
        stock_amount = row_data['X.6']
        if row_data['X.5'] == "Buy":
            current_cash = current_cash - (stock_value*stock_amount)
            trades["_CASH"][current_date] = current_cash
            current_stocks[symb] = current_stocks[symb] + stock_amount
            trades[symb][current_date] = current_stocks[symb]
        else:
            current_cash = current_cash + (stock_value*stock_amount)
            trades["_CASH"][current_date] = current_cash
            current_stocks[symb] = current_stocks[symb] - stock_amount
            trades[symb][current_date] = current_stocks[symb]

    #trades.fillna(method='ffill',inplace=True)
    #trades.fillna(method='bfill',inplace=False)
    trades.fillna(0)
    #alt_cash = current_cash
    #alt_cash = trades.cumsum()

    value_data = pd.Dataframe(index=list(ldt_timestamps),columns=list("V"))
    value_data = value_data.fillna(0)
    value_data = value_data.cumsum(axis=0)
    for day in ldt_timestamps:
        value = 0
        for sym in ls_symbols:
            if sym == "_CASH":
                value = value + trades[sym][day]
            else:
                value = calue + trades[sym][day]*d_data['close'][sym][day]
        value_data["V"][day] = value

    fileout = open(values_file,"w")
    for row in value_data.iterrows():
        file_out.writelines(str(row[0].strftime('%Y,%m,%d')) + ", " + str(row[1]["V"].round()) + "\n" )
    fileout.close()
def main(argv):
    if len(sys.argv) != 3:
        print "Invalid arguments for marketsim.py. It should be of the following syntax: marketsim.py orders_file.csv values_file.csv"
        sys.exit(0)

    #initial_cash = int (sys.argv[1])
    initial_cash = 1000000
    ordersFile = str(sys.argv[1])
    valuesFile = str(sys.argv[2])
    marketsim(initial_cash,ordersFile,valuesFile)

if __name__ == "__main__":
    main(sys.argv[1:])
The input I gave to the command line was:
python marketsim.py orders.csv values.csv
I guess the problem lies either in the imports or in the main function (including the if __name__ == "__main__" check below def main(argv)).
I should point out that the files orders.csv and values.csv exist and are located in the same folder.
I hope I have made everything clear.
So, I am looking forward to reading your answers community-mates! :D
Thank you!
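One side note, not an answer from the thread: the script imports argparse but never uses it, and main(argv) ignores the argv it receives in favour of sys.argv. A minimal sketch of the intended two-file interface using argparse (my addition) could look like this:

import argparse

def parse_args(argv=None):
    # Expected call: python marketsim.py orders.csv values.csv
    parser = argparse.ArgumentParser(prog="marketsim.py")
    parser.add_argument("orders_file", help="CSV file with the orders to simulate")
    parser.add_argument("values_file", help="CSV file to write the daily portfolio values to")
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    print("orders:", args.orders_file, "values:", args.values_file)
    # marketsim(1000000, args.orders_file, args.values_file)  # hook in the function from the question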

Running multiple functions based on one imported excel dataframe

I have created one function per client. They are all basically the same (only the worker names inside each one change).
The first function runs as it should, but then the second one does not work. It's as if the DataFrame isn't being carried through the rest of the code.
import pandas as pd
import sys

#file loc
R1 = input('Data do Relatório desejado (dd.mm) ---> ')
loc = r'C:\Users\lucas.mascia\Downloads\relatorio-{0}.xlsx'.format(R1)

#opening file with exact needed columns
df = pd.read_excel(loc)
df = df[[2,15,16,17]]

[...]

def func1():
    global df, R1, bcsulp1, bcsulp2

    #List of solicitantes in Postal Saude
    list_sol = [lista_solic["worker1"]]

    #filter Postal Saude solicitantes
    df = df[(df['Client']==lista_clientes[2])
            & (df['worker'].isin(list_sol))]

    #Alphabetical order
    df = df.sort_index(by=['worker', 'place'])

    #Grouping data of column
    gp = df.groupby('worker')

    # Loop?
    for i in range(1,2):
        df = gp.get_group(list_sol[(i-1)])

        #Name_i #############################################################
        #Internal and external protocol --------------------------------------------
        p_interno = df[(df['place'].str.contains("C. Martins"))
                       & (df['task']==lista_tarefas[1])]
        globals()['fi'+str(i)] = len(p_interno)
        f_i = globals()['fi'+str(i)]
        p_externo = df[(~df['place'].str.contains("C. Martins"))
                       & (df['task']==lista_tarefas[1])]
        globals()['fe'+str(i)] = len(p_externo)
        f_e = globals()['fe'+str(i)]

        #Internal and external virtual protocol ------------------------------------
        pv_interno = df[(df['place'].str.contains("C. Martins"))
                        & (df['task']==lista_tarefas[3])]
        globals()['vi'+str(i)] = len(pv_interno)
        v_i = globals()['vi'+str(i)]
        pv_externo = df[(~df['place'].str.contains("C. Martins"))
                        & (df['task']==lista_tarefas[3])]
        globals()['ve'+str(i)] = len(pv_externo)
        v_e = globals()['ve'+str(i)]

        #Normal and special postal protocol
        pp_normal = df[(df['task']==lista_tarefas[51])]
        pp_especial = df[(df['task']==lista_tarefas[52])]
        globals()['postal'+str(i)] = len(pp_especial) + len(pp_normal)
        post = globals()['postal'+str(i)]

        #Full and partial copy 6,1 - 6,2
        copia_i = df[(df['task']==lista_tarefas[61])]
        copia_p = df[(df['task']==lista_tarefas[62])]
        globals()['copia'+str(i)] = len(copia_p) + len(copia_i)
        cop = globals()['copia'+str(i)]

        #Electronic copy
        copia_elet = df[(df['task']==lista_tarefas[7])]
        globals()['copia_elet'+str(i)] = len(copia_elet)
        cop_e = globals()['copia_elet'+str(i)]

        #AIJ / hearing / conciliation hearing
        aij = df[(df['task']==lista_tarefas[81])]
        aud = df[(df['task']==lista_tarefas[82])]
        conc = df[(df['task']==lista_tarefas[83])]
        globals()['audiencia'+str(i)] = len(aij) + len(aud) + len(conc)
        audi = globals()['audiencia'+str(i)]

        globals()['bcsulp'+str(i)] = [f_i, f_e, v_i, v_e, post, cop, cop_e, audi]

def func2(): [...]
def func3(): [...]
def func4(): [...]

func1()
func2()
The following error comes up:
Traceback (most recent call last):
  File "Relatorio_Filtro.py", line 736
    func2()
  File "Relatorio_Filtro.py", line 682
    df = gp.get_group(list_sol[(i-1)])
  File "C:..."
    inds = self._get_index(name)
  File "C:..."
    return self.indices[name]
KeyError: 'WORKER1'
Question:
Am I missing something, so that the Excel DataFrame imported at the beginning is carried through the whole program?
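A likely cause, based only on the posted code and not confirmed in the thread: func1 declares global df and then overwrites it with the filtered subset for the first client, so by the time func2 runs, the full DataFrame is gone and get_group fails with a KeyError. A minimal sketch of the usual fix is to pass the original DataFrame in as a parameter, work on a copy, and return the result instead of mutating globals (names like build_report, client, and workers below are placeholders, not from the question):

import pandas as pd

def build_report(df, client, workers):
    """Filter a *copy* of the full report for one client; the caller's df is left untouched."""
    sub = df[(df['Client'] == client) & (df['worker'].isin(workers))].copy()
    sub = sub.sort_values(by=['worker', 'place'])
    return {name: group for name, group in sub.groupby('worker')}

# Usage sketch: the same full df is reused for every client.
# full_df = pd.read_excel(loc)
# report1 = build_report(full_df, lista_clientes[2], [lista_solic["worker1"]])
# report2 = build_report(full_df, lista_clientes[3], [lista_solic["worker2"]])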
