Why does my application close immediately after launching - python

I am a beginner with Python, and I have two questions.
When I run auto-py-to-exe, my application does not work correctly.
Up to line 8 of my code, I want the user to select the .csv file to upload to the program. After running auto-py-to-exe, the application closes and does not let the user select the file to upload.
What am I missing in my code to keep the application open until the user selects the required file, and then run the rest of the code?
When running this .py file in PyCharm, it works as it should.
This next question is unrelated to this topic, but: how do I pass a filename dynamically?
On line 33 of my code I will be writing a specially formatted .csv as result.csv to a local directory. How do I pass the variable "filename" as a filename in the path statement on line 33?
df.to_csv(r'C:\WriteFilesHere\result.csv')
When I run the program, it always overwrites "result.csv" with the new file selected on line 8 of my code. I want a new file created with a different name every time I run the program.
```python
from tkinter import *
import pandas as pd
import csv
from tkinter.filedialog import askopenfilename

Tk().withdraw()
filename = askopenfilename()
print(filename)

with open('preliminary.csv', 'w') as output:
    with open(filename) as csv_file:
        output_data = csv.writer(output, delimiter=",")
        data = csv.reader(csv_file, delimiter=',')
        line_count = 0
        for row in data:
            if line_count == 0:
                output_data.writerow(row)
                line_count += 1
            else:
                output_data.writerow(row)
                line_count += 1

with open("preliminary.csv") as readfile:
    reader1 = csv.reader(readfile)
    read = []
    for row in reader1:
        if len(row) != 0:
            read = read + [row]

pd.set_option('display.max_rows', 4000)
df = pd.DataFrame(read)
print(df)
df.to_csv(r'C:\WriteFilesHere\result.csv')
```

About the exe: you can add an input() call at the end of your program:
DonotClose = input("press Enter to exit")
It will pause the program until the user presses Enter; you can use any variable name you like.
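For the second question, here is a minimal sketch that derives the output name from the selected input file (the helper name build_output_path and the _result suffix are my own inventions; the output directory comes from the question):

```python
import os

def build_output_path(selected_file, out_dir=r'C:\WriteFilesHere'):
    # Derive a per-input output name, e.g. "data.csv" -> "data_result.csv",
    # so each run writes a different file instead of overwriting result.csv.
    base = os.path.splitext(os.path.basename(selected_file))[0]
    return os.path.join(out_dir, base + '_result.csv')

# then: df.to_csv(build_output_path(filename))
```

With this, the name returned by askopenfilename() flows straight through to to_csv, and each selected input gets its own result file.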

Related

Appending the content of old csv into new csv

I created a function, and the idea is that if the "main" file (transaction_ledger.csv) doesn't exist, I need to create it and append the new file (user input: which file) to it.
The code is working, BUT the new file has an additional blank row between every two rows of content.
Note: I cannot use pandas.
Thank you in advance for helping me.
Here is my code and the output:
```python
import csv
import os

# Create the ImportFunction
def ImportFunction():
    # First step: ask the user which file they would like to import
    which_file = input("Which file would you like to import?")
    # Second step: load the file into Python. Use "with open" inside a
    # try/except block so the program doesn't crash if the file doesn't exist
    try:
        with open(which_file, 'r') as file:
            user_file = file.read()
            print(user_file)
    except:
        print("Sorry the user file can't be found. Please try again.")
    # Third step: open transaction_ledger.csv. If the file doesn't exist,
    # create it; if it does, perform the actions
    try:
        with open('transaction_ledger.csv', 'r') as file:
            file_content_transaction_ledger = file.read()
            print(file_content_transaction_ledger)
    except:
        # Open the user file and read it to append its rows to the new,
        # empty transaction_ledger.csv
        with open(which_file, 'r') as old_file:
            reader_obj = csv.reader(old_file)  # read the current csv file
            with open('transaction_ledger.csv', 'w') as new_file:
                writer_obj = csv.writer(new_file, delimiter=",")
                for data in reader_obj:
                    # loop through the read data and write each row
                    writer_obj.writerow(data)
        print("New file created and filled in with old file data")

ImportFunction()
with open('transaction_ledger.csv', 'r') as file:
    file_content_transaction_ledger = file.read()
    print(file_content_transaction_ledger)
```
**Here is the output:**
```
Which file would you like to import? transactions_q1.csv
9547632,Arasaka,3/1/2022,6500,PENDING
1584037,Militech,3/15/2022,3000,COMPLETE
9433817,Arasaka,4/1/2022,450,COMPLETE
9462158,Arasaka,4/29/2022,900,PENDING
New file created and filled in with old file data
9547632,Arasaka,3/1/2022,6500,PENDING
1584037,Militech,3/15/2022,3000,COMPLETE
9433817,Arasaka,4/1/2022,450,COMPLETE
9462158,Arasaka,4/29/2022,900,PENDING
```
I found a solution: I need to pass the following inside writer_obj = csv.writer(new_file, delimiter=","):
lineterminator = "\n"
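The blank rows come from Windows newline translation; the csv module's documented alternative to setting lineterminator is to open the output file with newline=''. A minimal sketch of the effect, using io.StringIO with newline='' to stand in for the real file:

```python
import csv
import io

rows = [['9547632', 'Arasaka', '3/1/2022', '6500', 'PENDING'],
        ['1584037', 'Militech', '3/15/2022', '3000', 'COMPLETE']]

# Stand-in for open('transaction_ledger.csv', 'w', newline=''):
# with newline='' the stream does not translate the writer's '\r\n'
# into '\r\r\n', which is what produces the blank row between records.
buf = io.StringIO(newline='')
writer_obj = csv.writer(buf, delimiter=',')
writer_obj.writerows(rows)

print(buf.getvalue().splitlines())  # no empty strings, i.e. no blank rows
```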

How can I create a new text file in python

I made this program; it takes user input and writes it to a new text file.
```python
output = input('Insert your text')
f = open("text.txt", "a")
f.write(output)
```
This code takes a user's input and writes it to a new text file. But if the file already exists at the path, the code just appends to it. I want the code to create a new file at the path every time the program is run. So the first time the code runs it will be saved as text.txt, the second time it runs it should output a new file called text(1).txt, and so on.
Start by checking if text.txt exists. If it does, with a loop, check for text(n).txt, with n being some positive integer starting at 1.
```python
from os.path import isfile

output = input('Insert your text')
newFileName = "text.txt"
i = 1
while isfile(newFileName):
    newFileName = "text({}).txt".format(i)
    i += 1
f = open(newFileName, "w")
f.write(output)
f.close()
```
Eventually, the loop will reach some n for which the filename text(n).txt doesn't exist and will save the file with that name.
Check if the file you are trying to create already exists. If yes, then change the file name, else write text to the file.
```python
import os

output = input('Insert your text ')
filename = 'text.txt'
i = 1
while os.path.exists(filename):
    filename = 'text (' + str(i) + ').txt'
    i += 1
f = open(filename, "a")
f.write(output)
f.close()  # make sure the text is flushed to disk
```
Check if file already exists
```python
import os.path
os.path.exists('filename-here.txt')
```
If the file exists, then create the file with another filename (e.g. by appending the date & time, or a number, to the name).
A problem with checking for existence is that there can be a race condition if two processes try to create the same file:
process 1: does file exist? (no)
process 2: does file exist? (no)
process 2: create file for writing ('w', which truncates if it exists)
process 2: write file.
process 2: close file.
process 1: create same file for writing ('w', which truncates process 2's file).
A way around this is mode 'x' (open for exclusive creation, failing if the file already exists), but in the scenario above that would just make process 1 get an error instead of truncating process 2's file.
To open the file with an incrementing filename as the OP described, this can be used:
```python
import os

def unique_open(filename):
    # "name" contains everything up to the extension.
    # "ext" contains the last dot (.) and extension, if any
    name, ext = os.path.splitext(filename)
    n = 0
    while True:
        try:
            return open(filename, 'x')
        except FileExistsError:
            n += 1
            # build new filename with incrementing number
            filename = f'{name}({n}){ext}'

file = unique_open('test.txt')
file.write('content')
file.close()
```
To make the function work with a context manager ("with" statement), a contextlib.contextmanager can be used to decorate the function and provide automatic .close() of the file:
```python
import os
import contextlib

@contextlib.contextmanager
def unique_open(filename):
    n = 0
    name, ext = os.path.splitext(filename)
    try:
        while True:
            try:
                file = open(filename, 'x')
            except FileExistsError:
                n += 1
                filename = f'{name}({n}){ext}'
            else:
                print(f'opened {filename}')  # for debugging
                yield file  # value of with's "as"
                break  # open succeeded, so exit while
    finally:
        file.close()  # cleanup when the with block exits

with unique_open('test.txt') as f:
    f.write('content')
```
Demo:
```
C:\>test.py
opened test.txt
C:\>test
opened test(1).txt
C:\>test
opened test(2).txt
```

csv to txt python conversion script leaves file with 0 bytes

I have an SSIS loop package that is calling a Python script multiple times.
The intent:
There is a folder of csv files. I need them converted to pipe-delimited text files. Some of the files have bad rows in them. The Python script converts the csv files into the pipe files while removing the bad records.
The Python code:
```python
import csv
import sys

if len(sys.argv) != 4:
    print(sys.argv)
    sys.exit("usage: python csvtopipe.py <<SOURCE.csv>> <<TARGET.txt>> <<number of columns>>")

source = sys.argv[1]
target = sys.argv[2]
colcount = sys.argv[3]

file_comma = open(source, "r", encoding="unicode_escape")
reader_comma = csv.reader(file_comma, delimiter=',')
file_pipe = open(target, 'w', encoding="utf-8")
writer_pipe = csv.writer(file_pipe, delimiter='|', lineterminator='\n')
for row in reader_comma:
    if len(row) == int(colcount):
        print("write this..")
        writer_pipe.writerow(row)
file_pipe.close()
file_comma.close()
```
The SSIS package:
The Python call from SSIS:
```
python csvtopipe.py <<SOURCE.csv>> <<TARGET.txt>> <<number of columns>>
```
The problem: the loop works correctly, but when the individual call finishes, the file rewrites to 0 bytes. I can't tell if it's an SSIS problem or a Python problem.
Thanks!
UPDATE 1
This is the original version of the code; same result:
```python
import csv
import sys

if len(sys.argv) != 4:
    print(sys.argv)
    sys.exit("usage: python csvtopipe.py <<SOURCE.csv>> <<TARGET.txt>> <<number of columns>>")

source = sys.argv[1]
target = sys.argv[2]
colcount = sys.argv[3]

with open(source, "r", encoding="unicode_escape") as file_comma:
    reader_comma = csv.reader(file_comma, delimiter=',')
    with open(target, 'w', encoding="utf-8") as file_pipe:
        writer_pipe = csv.writer(file_pipe, delimiter='|', lineterminator='\n')
        for row in reader_comma:
            if len(row) == int(colcount):
                print("write")
                writer_pipe.writerow(row)
```
Firstly, I would switch to using with open() ... rather than separate open() and close() calls. This helps ensure that the file is automatically closed in the event of a problem.
As the script is being invoked multiple times, I would add a timestamp to your output filename. This would help to ensure that each time it is run, a different file is produced.
Lastly, you could add a test to ensure that only one copy of the script is executed at the same time. For Windows based applications this can be done using a Windows Mutex. On Linux, the use of a file lock can be used. This approach is sometimes referred to as the singleton pattern.
```python
import win32event
from datetime import datetime
import csv
import sys
import os

if len(sys.argv) != 4:
    print(sys.argv)
    sys.exit("usage: python csvtopipe.py <<SOURCE.csv>> <<TARGET.txt>> <<number of columns>>")

# Wait up to 30 seconds for another copy of the script to stop running
windows_mutex = win32event.CreateMutex(None, False, 'CSV2PIPE')
win32event.WaitForSingleObject(windows_mutex, 30000)

source = sys.argv[1]
target = sys.argv[2]
colcount = sys.argv[3]

# Add a timestamp to the output filename
path, ext = os.path.splitext(target)
timestamp = datetime.now().strftime("%Y_%m_%d %H%M_%S")
target = f'{path}_{timestamp}{ext}'

with open(source, "r", encoding="unicode_escape") as file_comma, \
        open(target, 'w', encoding="utf-8") as file_pipe:
    reader_comma = csv.reader(file_comma, delimiter=',')
    writer_pipe = csv.writer(file_pipe, delimiter='|', lineterminator='\n')
    for row in reader_comma:
        if len(row) == int(colcount):
            print("write this..")
            writer_pipe.writerow(row)

# Release the mutex so the next invocation does not wait for the full timeout
win32event.ReleaseMutex(windows_mutex)
```

Skip header when writing to an open CSV

I am compiling a load of CSVs into one. The first CSV contains the headers, and I am opening the output in write mode (maincsv). I am then making a list of all the others, which live in a different folder, and attempting to append them to the main one.
It works; however, it just writes over the headings. I just want to start appending from line 2. I'm sure it's pretty simple, but all the next(), etc. things I try just throw errors. The headings and data are aligned, if that helps.
```python
import os, csv

maincsv = open(r"C:\Data\OSdata\codepo_gb\CodepointUK.csv", 'w', newline='')
maincsvwriter = csv.writer(maincsv)
curdir = os.chdir(r"C:\Data\OSdata\codepo_gb\Data\CSV")
csvlist = os.listdir()
csvfiles = []
for file in csvlist:
    path = os.path.abspath(file)
    csvfiles.append(path)
for incsv in csvfiles:
    opencsv = open(incsv)
    csvreader = csv.reader(opencsv)
    for row in csvreader:
        maincsvwriter.writerow(row)
maincsv.close()
```
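One way to skip each incoming file's header row is the next() approach the asker mentions. A sketch (append_without_header is a hypothetical helper, and in-memory files stand in for the real CSVs):

```python
import csv
import io

def append_without_header(csvreader, writer):
    # Consume the header row; the None default avoids StopIteration on empty files
    next(csvreader, None)
    for row in csvreader:
        writer.writerow(row)

# usage: the first file's header is written once, later files are appended headerless
main = io.StringIO()
maincsvwriter = csv.writer(main, lineterminator='\n')
maincsvwriter.writerow(['postcode', 'easting', 'northing'])  # header from the first file
incoming = io.StringIO('postcode,easting,northing\nAB1 0AA,385386,801193\n')
append_without_header(csv.reader(incoming), maincsvwriter)
print(main.getvalue())
```

In the asker's loop, the same call would be next(csvreader, None) immediately after csvreader = csv.reader(opencsv), before the row-copying loop.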
To simplify things, I have the code load all the files in the directory the Python code is run in. This will get the first line of the first .csv file and use it as the header.
```python
import os

count = 0
collection = open('collection.csv', 'a')
files = [f for f in os.listdir('.') if os.path.isfile(f)]
for f in files:
    if '.csv' in f:
        solecsv = open(f, 'r')
        if count == 0:
            # assuming the header is 1 line; only write it for the first file
            header = solecsv.readline()
            collection.write(header)
            count += 1
        for x in solecsv:
            if not (header in x):
                collection.write(x)
        solecsv.close()
collection.close()
```

Exit part of a script in Spyder

I am working on a simple task: appending multiple CSV files and adding an extra column to each.
The following code works perfectly in the Python prompt shell:
```python
import csv
import glob
import os

data_path = "C:/Users/mmorenozam/Documents/Python Scripts/peptidetestivory/"
outfile_path = "C:/Users/mmorenozam/Documents/Python Scripts/peptidetestivory/alldata.csv"
filewriter = csv.writer(open(outfile_path, 'wb'))
file_counter = 0
for input_file in glob.glob(os.path.join(data_path, '*.csv')):
    with open(input_file, 'rU') as csv_file:
        filereader = csv.reader(csv_file)
        name, ext = os.path.splitext(input_file)
        ident = name[-29:-17]
        for i, row in enumerate(filereader):
            row.append(ident)
            filewriter.writerow(row)
    file_counter += 1
```
However, when I run this code using Spyder, in order to get the desired .csv file I have to add
exit()
or type "%reset" in the IPython console.
Is there a better way to finish this part of the script? The following parts of my code work with the .csv file generated here, and using the options above is annoying.
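The likely cause is that open(outfile_path, 'wb') is never closed, so the writer's buffer only reaches disk when the interpreter exits, which is exactly what exit() and %reset force. A sketch of a fix (restructured into a hypothetical combine_csvs function and ported to Python 3, where 'w' with newline='' replaces 'wb'):

```python
import csv
import glob
import os

def combine_csvs(data_path, outfile_path):
    # "with" guarantees the output buffer is flushed and the file closed
    # as soon as the block ends, so no exit() or %reset is needed in Spyder.
    with open(outfile_path, 'w', newline='') as outfile:
        filewriter = csv.writer(outfile)
        for input_file in glob.glob(os.path.join(data_path, '*.csv')):
            with open(input_file, 'r', newline='') as csv_file:
                name, _ = os.path.splitext(input_file)
                ident = name[-29:-17]  # same identifier slice as the original script
                for row in csv.reader(csv_file):
                    row.append(ident)
                    filewriter.writerow(row)
```

Calling combine_csvs(data_path, outfile_path) then leaves a fully written alldata.csv behind as soon as it returns.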
