I am writing a program that will pull variables from a template and effectively Find/Replace them within that template.
Example Template:
VARIABLES
#username
#password
#secret
###########################################################
My username is #username
Password is #password
Secret is #secret
The program will find each variable and ask one by one for user input, opening the file, saving the contents and then closing the file ready for the next variable.
All is working well besides a strange one. Once I have run the code, the end of my text file appears to go a little wild. See the output below. As you can see, it successfully took the variables and placed them, however it added "is TESTis TESTetis #secret" to the end?
VARIABLES
User
Pass
TEST
###########################################################
My username is User
Password is Pass
Secret is TESTis TESTis TESTetis #secret
I am new to Python (this week) so excuse the code below. I have made it work in my own special way! It may not be the most efficient. Just struggling to see where the extra text is being added.
Code:
##COPY CONTENTS FROM READ ONLY TO NEW FILE
with open("TestTemplate.txt", "rt") as fin:
    with open("out.txt", "wt") as fout:
        for line in fin:
            fout.write(line)
        fin.seek(0)
        fout.seek(0)
fin.close()
fout.close()
##PULL VARIABLES AND FIND/REPLACE CONTENTS
with open("out.txt", "rt") as fin:
    with open("out.txt", "rt") as searchf:
        with open("out.txt", "r+") as fout:
            for line in fin:
                if line.startswith("#"):
                    trimmedLine = line.rstrip()
                    ## USER ENTRY
                    entry = input("Please Enter " + trimmedLine + ": ")
                    for line in searchf:
                        ## ENSURE ONLY VARIABLES AFTER '#' ARE EDITED. KEEPS IT NEAT
                        if trimmedLine in line:
                            fout.write(line.replace(trimmedLine, entry))
                        else:
                            fout.write(line)
                    ##RESET FOCUS TO THE TOP OF THE FILE READY FOR NEXT ITERATION
                    searchf.seek(0)
                    fout.seek(0)
Thanks in advance
Your replacement strings are shorter than the original template placeholders, resulting in leftover characters after you seek back to the start of the file. You should truncate the file before you call seek() so that the extra characters at the end are trimmed.
##RESET FOCUS TO THE TOP OF THE FILE READY FOR NEXT ITERATION
searchf.seek(0)
fout.truncate()
fout.seek(0)
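If it helps to see why the truncate is needed, here is a minimal standalone sketch (demo.txt is just an example file) of overwriting a file in place with shorter content:
with open("demo.txt", "w") as f:
    f.write("Secret is #secret\n")
with open("demo.txt", "r+") as f:
    f.write("Secret is TEST\n")   # shorter than the original content
    f.truncate()                  # cut the file at the current position; without this, leftover characters from the old, longer text remain
    f.seek(0)
    print(f.read())               # Secret is TEST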
You are opening the same file (out.txt) in different modes at the same time - doesn't this strike you as evil? It is like having 3 people cooking in the same pan. One does eggs, one bacon, the third caramel: it might work out - I wouldn't like to taste it.
Cleaner code follows the IPO model (yeah, it's old, but still valid):
Open file, read in content, close file.
Do your replacements.
Open output file, write replaced text, close file.
Shorter version of reading files:
with open("TestTemplate.txt", "rt") as fin,
open("out.txt", "wt") as fout:
text = fin.read() # read in text from template
fout.write(text) # you could simply use module os and copy() the file ...
# or simply skip copying here and use open("out.txt","w") below
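If you do want the explicit copy, a minimal sketch using the standard library would be:
import shutil

# copy the template to out.txt in one call instead of reading and re-writing it by hand
shutil.copy("TestTemplate.txt", "out.txt")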
Using a fixed text here - you could acquire it as above:
text = """VARIABLES
#username
#password
#secret
###########################################################
My username is #username
Password is #password
Secret is #secret"""
replaceMe = {} # dictionary to hold the to be replaced parts and its replacement
# go through all lines
for l in text.splitlines():
if l.startswith("#"): # if it starts with #, ask for content top replace it with
replaceMe[l.rstrip()] = input("Please Enter {}:".format(l.rstrip()))
newtext = text
# loop over all keys in dict, replace key in text
for k in replaceMe:
newtext = newtext.replace(k,replaceMe[k])
print(text)
print(newtext)
# save changes - using "w" so you can skip copying the file further up
with open("out.txt","w") as f:
f.write(text)
Output after replacement:
VARIABLES
a
b
c
###########################################################
My username is a
Password is b
Secret is c
I am scraping public emails from user channel IDs based on my keywords, but some channel IDs repeat and then the emails repeat too when scraping a large number of channel IDs. So before I write them line by line to my text file, I need to also check for a possible duplicate email and ignore it if the email already exists in the text file.
Also, I would be grateful if you could show me how to remove empty spaces, because I already have code which sometimes works and sometimes does not, and somehow it writes an empty line with a space.
My Code which writes line by line all emails:
with open("scraped_emails.txt", 'a') as f:
for email in cleanEmail:
f.write(email.replace(" ", "")+ '\n')
You can just add an if statement to check if the email you want to append is already in the file or not, by doing:
cleanEmail = ['a#b.com', ' glennbz#veriznon.net ', 'x#yy.ul']
with open("scraped_emails.txt", 'r+') as f:
    emails = f.read()
    for email in cleanEmail:
        email = email.strip()   # strip leading/trailing spaces first so the duplicate check matches what is stored
        if email not in emails:
            f.write(email + '\n')
            emails += email + '\n'   # remember it so duplicates within this batch are also skipped
Note that I added the strip() method, and this will solve your empty spaces problem by removing both leading and trailing white spaces.
# Output
a#b.com
glennbz#veriznon.net
x#yy.ul
If I understand you correctly, you want to clean up your file scraped_emails.txt, remove duplicates and correct the emails by removing whitespace?
I would do two steps:
parse all your emails from scraped_emails.txt, strip the spaces and store them in a set (unique)
overwrite your existing file with the cleaned-up values. If you are unsure about this, write to a different file first and check the results
clean_emails = set()
file_name = "scraped_emails.txt"

# initial reading of emails
print(f"Reading {file_name} to clean emails ..")
initial_line_counter = 0
with open(file_name, "r") as f_in:
    for line in f_in:
        # remember input lines, just for statistics
        initial_line_counter += 1
        # strips newlines and whitespaces
        cleaned_email = line.rstrip("\n").strip()
        # you mentioned empty lines - this prevents adding of empty strings to your set
        if cleaned_email:
            clean_emails.add(cleaned_email)

# opening the file with mode="w" overwrites the existing file
with open(file_name, "w") as f_out:
    for email in clean_emails:
        f_out.write(f"{email}\n")

print(f"Reduced {initial_line_counter} to {len(clean_emails)} cleaned email addresses")
You can test this with a scraped_emails.txt with the following content:
some_mail1#yahoo.com
some_mail2#yahoo.com
some_mail3#yahoo.com
some_mail4#yahoo.com
some_mail5#yahoo.com
some_mail6#yahoo.com
some_mail#y7ahoo.com
some_mail8#yahoo.com
some_mail9#yahoo.com
some_mail9#yahoo.com
some_mail9#yahoo.com
I found some projects on YouTube, and this is one of them.
I'm trying the password manager program. Here's the link: https://www.youtube.com/watch?v=DLn3jOsNRVE
And here's my code:
from cryptography.fernet import Fernet

'''
def write_key():
    key = Fernet.generate_key()
    with open("key.key", "wb") as key_file:
        key_file.write(key)
'''

def load_key():
    file = open("key.key", "rb")
    key = file.read()
    file.close()
    return key

key = load_key()
fer = Fernet(key)

def view():
    with open('passwords.txt', 'r') as f:
        for line in f.readlines():
            data = line.rstrip()
            user, passw = data.split("|")
            print("User:", user, "| Password:",
                  fer.decrypt(passw.encode()).decode())

def add():
    name = input('Account Name: ')
    pwd = input("Password: ")
    with open('passwords.txt', 'a') as f:
        f.write(name + "|" + fer.encrypt(pwd.encode()).decode() + "\n")

while True:
    mode = input(
        "Would you like to add a new password or view existing ones (view, add), press q to quit? ").lower()
    if mode == "q":
        break
    if mode == "view":
        view()
    elif mode == "add":
        add()
    else:
        print("Invalid mode.")
        continue

# this is a module that will allow you to encrypt txt-s
# pass is used as a placeholder for future code
# rstrip removes any trailing chars
# split will look for the char in the arg and it will split the string there
# a append, w write, r read, r+ read and write
# with w mode, you completely overwrite the file so be careful
# with a mode you can add something to the end
I followed the instructions precisely and I have no idea what could cause my problem. When I run it, I get an error message:
The guy even has his version of the code on GitHub. I copy-pasted it and it still doesn't work.
In the video, does the key.key file generate itself, or not?
The only possible code that could write to the key.key file is both:
commented out; and
not called even if it were not commented out.
So, no, it's not correct to say that "the key.key file generate[s] itself".
Looking over the linked video, the presenter at some point (at 1:29:50, more precisely) had the code call that function to create the file, then removed that call and commented out that function.
The likely cause of your problem is that you made a mistake in the process somewhere(1), and this resulted in the file not being created (or being created somewhere other than where it's expected). I suggest you go back to that point in the video and re-do it.
Or, you could just create the key.key file, containing the content (taken from the video):
Raq7IMZ4QkqK20j7lKT3bxJTgwxeJFYx4ADjTqVKdQY=
That may get you going faster than revisiting the steps that the presenter took.
(1) Re your comment that "[t]he guy even has his version of the code on github", it may be that you thought you could bypass the video and just go straight to the final code. If so, that was a mistake, as the final code expects you to have run the incomplete code in order to generate the key file.
If so, I would consider that a failing of the presenter. It would have been far better to leave the keyfile-creating code in and call it, for example, when you ran the code with python the_code.py --make-key-file.
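For illustration, here is a minimal sketch of what that could look like; the write_key() body matches the commented-out code above, while the --make-key-file wiring is just an assumption, not the presenter's actual code:
import sys
from cryptography.fernet import Fernet

def write_key():
    # generate a new Fernet key and save it next to the script
    key = Fernet.generate_key()
    with open("key.key", "wb") as key_file:
        key_file.write(key)

if __name__ == "__main__":
    # hypothetical flag: run `python the_code.py --make-key-file` once to create key.key
    if "--make-key-file" in sys.argv[1:]:
        write_key()
        print("key.key created")
        sys.exit(0)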
What's the way to extract only the lines with a specific word from requests (an online text file) and write them to a new text file? I am stuck here...
This is my code:
r = requests.get('http://website.com/file.txt'.format(x))
with open('data.txt', 'a') as f:
    if 'word' in line:
        f.write('\n')
        f.writelines(str(r.text))
        f.write('\n')
If I remove if 'word' in line:, it works, but for all lines. So it's just copying all lines from one file to another.
Any idea how to give the correct command to extract (filter) only lines with a specific word?
Update: This is working, but if that word exists in the requests file, it starts copying ALL lines; I need to copy only the line with 'SOME WORD'.
I have added this code:
for line in r.text.split('\n'):
    if 'SOME WORD' in line:
Thank you guys for all the answers, and sorry if I didn't make myself clear.
Perhaps this will help.
Whenever you invoke POST/GET or whatever, always check the HTTP response code.
Now let's assume that the lines within the response text are delimited with newline ('\n') and that you want to write a new file (change the mode to 'a' if you want to append). Then:
import requests

(r := requests.get('SOME URL')).raise_for_status()
with open('SOME FILENAME', 'w') as outfile:
    for line in r.text.split('\n'):
        if 'SOME WORD' in line:
            print(line, file=outfile)
            break
Note:
You will need Python 3.8+ in order to take advantage of the walrus operator in this code
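If you are on an older Python version, the same logic without the walrus operator would be:
import requests

r = requests.get('SOME URL')
r.raise_for_status()   # check the HTTP response code
with open('SOME FILENAME', 'w') as outfile:
    for line in r.text.split('\n'):
        if 'SOME WORD' in line:
            print(line, file=outfile)
            break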
I would suggest these steps for properly handling the file:
Step 1: Stream the downloaded file to a temporary file.
Step 2: Read the lines from the temporary file.
Step 3: Generate the main file based on your filter.
Step 4: Delete the temporary file.
Below is the code that performs these steps:
import requests
import os

def read_lines(file_name):
    with open(file_name, 'r') as fp:
        for line in fp:
            yield line

if __name__ == "__main__":
    word = 'ipsum'
    temp_file = 'temp_file.txt'
    main_file = 'main_file.txt'
    url = 'https://filesamples.com/samples/document/txt/sample3.txt'

    with open(temp_file, 'wb') as out_file:
        content = requests.get(url, stream=True).content
        out_file.write(content)

    with open(main_file, 'w') as mf:
        out = filter(lambda x: word in x, read_lines(temp_file))
        for i in out:
            mf.write(i)

    os.remove(temp_file)
Well, there is a missing line you have to add in order to check with the if statement: you never loop over the lines of the response.
import requests

r = requests.get('http://website.com/file.txt').text
with open('data.txt', 'a') as f:
    for line in r.splitlines():   # this is your loop where you get hold of each line
        if 'word' in line:        # so that you can check for your 'word'
            f.write(line + '\n')  # write the line that contains your word
Hey, I've written this code in Python and it iterates through the selected text file and reads it. My objective is to read the file and then write it out to a new file, replacing the word "winter" with nothing, or rather deleting the word from the second, revised file. I have two txt files called odetoseasons and odetoseasons_censored; the contents of these two files are identical before the program starts:
I love winter
I love spring
Summer, Fall and winter again.
This is the Python file named readwrite.py. When I run the program it keeps the contents in odetoseasons but somehow deletes the contents of odetoseasons_censored.txt, not sure why.
# readwrite.py
# Demonstrates reading from a text file and writing to the other

filename = input("Enter file name (without extension): ")
fil1 = filename + ".txt"
fil2 = filename + "_censored.txt"
bad_word = ['winter']

print("\nLooping through the file, line by line.")
in_text_file = open(fil1, "r")
out_text_file = open(fil2, "w")
for line in in_text_file:
    print(line)
    out_text_file.write(line)
in_text_file.close()
out_text_file.close()

out_text_file = open(fil2, "w")
for line in fil2:
    if "winter" in line:
        out_text_file.write(line)
        line.replace("winter", "")
Actually there are two errors in your code. Firstly, the function a.replace() returns a new string with the replaced word and does not alter the original string. Secondly, you are trying to read from a file you have opened in 'w' mode, which is not possible; if you need to both read and write you should use 'r+' mode.
Here is the correct (and more compact) code that you can use:
filename = input("Enter file name (without extension): ")
fil1 = filename + ".txt"
fil2 = filename + "_censored.txt"
bad_word = ['winter']

print("\nLooping through the file, line by line.")
in_text_file = open(fil1, "r")
out_text_file = open(fil2, "w")
for line in in_text_file:
    print(line)
    line_censored = line.replace("winter", "")
    print(line_censored)
    out_text_file.write(line_censored)
in_text_file.close()
out_text_file.close()
Initial users.csv file- columns are respectively username,real name,password.
fraud,mike ross,iloveharveynew
abc,ab isss c,coolgal
xyz,name last,rockpassnew
Algorithm-
1. Input username (from a cookie) & new-password from a html form.
2. Iterate over the csv file to print all the rows that do not contain 'username' to a new file final.csv
3. Remove users.csv file.
4. Append username,real name,new password to final.csv file.
5. Rename final.csv to users.csv
For instance, let's say user xyz was logged in and username=xyz was retrieved from cookie. The user changed the password to rockpassnewnew.
Output users.csv file-
fraud,mike ross,iloveharveynew
abc,ab isss c,coolgal
xyz,name last,rockpassnewnew
Here is the function defined that does this, which is called from a controller-
def change(self, new_password):
    errors = []
    if len(new_password) < 3: errors.append('new password too short')
    if errors:
        return errors
    else:
        with open('users.csv', 'r') as u:
            users = csv.reader(u)
            with open('final.csv', 'a') as f:
                final = csv.writer(f)
                for line in users:
                    variableforchecking1 = bottle.request.get_cookie('username')
                    if variableforchecking1 not in line:
                        final.writerow(line)
        os.remove('users.csv')
        variableforchecking1 = bottle.request.get_cookie('username')
        variableforchecking2 = bottle.request.get_cookie('real_name')
        with open('final.csv', 'a') as f:
            final = csv.writer(f)
            final.writerow([variableforchecking1, variableforchecking2, new_password])
        os.rename('final.csv', 'users.csv')
        return []
The controller code which calls this function is-
@bottle.get('/change')
def change():
    return bottle.template('change')

@bottle.post('/change')
def changePost():
    new_password = bottle.request.forms.get('new-password')
    username = me.username()
    errors = me.change(new_password)
    if errors:
        return bottle.template('change', errors=errors)
    me.login(username, new_password)
    return bottle.redirect('/home')
How do I prevent these blank rows from being created? Every time a password is changed, the number of blank rows increases considerably.
When opening a CSV file to be written to using a csv.writer, take care how you open the file.
The problem is that csv.writer does its own handling of line-endings. If a file opened with open is not opened carefully, the file object will also replace LF line-endings with CR+LF when writing data (this is the default text-mode behaviour on Windows). So when both are making these changes, the line endings in the output file can become CR+CR+LF. Text editors will often interpret this as two line endings.
The fix is to open the file in binary mode in Python 2, or with newline='' in Python 3 as recommended by the documentation for the csv module. To do this, replace both occurrences of
with open('final.csv', 'a') as f:
with
with open('final.csv', 'ab') as f:
if you are using Python 2, or
with open('final.csv', 'a', newline='') as f:
if you are using Python 3.
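If you want to see the difference for yourself on Windows (where the problem occurs), here is a small sketch with example filenames:
import csv

with open('bad.csv', 'w') as f:                 # text mode translates \n to \r\n, and csv.writer already writes \r\n
    csv.writer(f).writerow(['a', 'b'])

with open('good.csv', 'w', newline='') as f:    # newline='' leaves line endings to csv.writer
    csv.writer(f).writerow(['a', 'b'])

print(open('bad.csv', 'rb').read())    # b'a,b\r\r\n' - shows up as an extra blank row in editors
print(open('good.csv', 'rb').read())   # b'a,b\r\n'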