How to write txt files inside an infinite loop? - python

I am trying to make a loop that writes the date to a text file each time an event happens, but I can't get it to work since I need an infinite loop to run the program. If I put myfile.close() inside the loop, even inside the if x[14]=="track": block, I get:
myfile.write(wri)
ValueError: I/O operation on closed file.
However, if I place it outside the loop, the file doesn't close and nothing is written to the output file.
Here is the code
while 1:
    print("yes")
    response = requests.get('https://api.spotify.com/v1/me/player/currently-playing', headers=headers)
    soup2 = BeautifulSoup(response.text, "html.parser")
    x = re.findall('"([^"]*)"', str(soup2))
    if isinstance(x, list) == True:
        if len(x) >= 15:
            print(x[14])
            if x[14] == "track":
                os.system("TASKKILL /IM spotify.exe")
                sleep(2)
                subprocess.Popen("C:/Users/nebbu/AppData/Roaming/Spotify/Spotify.exe")
                sleep(2)
                import pyautogui
                pyautogui.press("playpause")
                pyautogui.press("l")
                print(x)
                wri = str(date) + "- -" + str(x[13] + ": " + str(x[14]))
                myfile.write(wri)
myfile.close()
The loop never ends; I don't know if it has to end before the file can be closed, or if there is another way of doing it.

Simply make a custom function and call it every time you want to add a new line to your text file. For example:
def f(dump):
    file = open('myfile.txt', 'a')
    file.write(dump)
    file.write('\n')
    file.close()
and then pass it the values you want to write on the fly.
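Equivalently, a with statement closes the file for you even if an exception is raised mid-write. A minimal sketch of how this could look in the loop from the question (the events.txt path and the datetime call are placeholders, not part of the original code):

from datetime import datetime

def append_line(path, text):
    # open in append mode, write one line, and let the with-block close the file
    with open(path, 'a') as f:
        f.write(text + '\n')

# inside the while loop, instead of myfile.write(wri):
# append_line('events.txt', str(datetime.now()) + "- -" + x[13] + ": " + x[14])

Opening the file for each write is marginally slower than keeping one handle open, but every event is flushed to disk as soon as it is logged.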

Related

Loop Not Functioning well

Guys, I've got a little problem with my code. The code is supposed to check a list of phone numbers and group them into text files based on provider, but it doesn't work as expected. It only saves a single number in each provider's file instead of multiple ones. This is my code; if anyone could help I'd be grateful. Sorry if my code is too traditional.
def main():
    dead = open('invalid_no.txt', 'a+')
    print('-------------------------------------------------------')
    print('-------------------------------------------------------')
    list = input('Your Phone Numbers List : ')
    base_url = "http://apilayer.net/api/validate"
    params = {
        'access_key': '3246123d1d67e385b1d9fa11d0e84959',
        'number': '',
    }
    numero = open(list, 'r')
    for num in numero:
        num = num.strip()
        if num:
            lines = num.split(':')
            params['number'] = lines[0]
            response = requests.get(base_url, params=params)
            print('status:', response.status_code)
            print('-------------------------------------')
            try:
                resp = response.json()
                print('number:', resp['valid'])
                print('number:', resp['international_format'])
                print('country:', resp['country_name'])
                print('location:', resp['carrier'])
                print('-------------------------------------')
                mok = open(resp['carrier'], 'w+')
                if resp['carrier'] == mok.name:
                    mok.write(num + '\n')
            except FileNotFoundError:
                if resp['carrier'] == '':
                    print('skipping')
                else:
                    mok = open(resp['carrier'], 'w+')
                    if resp['carrier'] == mok.name:
                        mok.write(num)
                    else:
                        print('No')

if __name__ == '__main__':
    main()
Opening a file with mode "w" will erase the existing file and start with an empty new one. That is why you are getting only one number: every time you open the file for writing, you overwrite whatever was there before. Mode "w+" does exist, but the "+" only adds reading on top of "w"; the file is still truncated when it is opened, so it behaves the same way here.
From the documentation for open():
The second argument is another string containing a few characters
describing the way in which the file will be used. mode can be 'r'
when the file will only be read, 'w' for only writing (an existing
file with the same name will be erased), and 'a' opens the file for
appending; any data written to the file is automatically added to the
end. 'r+' opens the file for both reading and writing. The mode
argument is optional; 'r' will be assumed if it’s omitted.
The quoted passage only lists the basic modes; adding "+" gives read-and-write access, but "w+" keeps the truncating behaviour of "w".
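A quick illustration of why the truncation matters (demo.txt is just a scratch filename for this example):

for n in ['111', '222', '333']:
    with open('demo.txt', 'w') as f:   # 'w' wipes the file on every open
        f.write(n + '\n')

# open('demo.txt').read() is now just '333\n'; with 'a' it would hold all three numbers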
I think you want mode "a" for append, so each number gets added to the end of the carrier's file instead of replacing its contents. Note that "a" creates the file if it does not already exist, so you will not get a FileNotFoundError just because the carrier file is missing; opening an empty filename (when resp['carrier'] is '') is a different matter, and your except block already guards against that.
If you want to be explicit about creating versus appending, you can check whether the file is there first: if not, open it for writing, otherwise open it for appending.
if os.path.exists(resp['carrier']):
    mok = open(resp['carrier'], 'a')
else:
    mok = open(resp['carrier'], 'w')
or, if you have a taste for one-liners,
mok = open(resp['carrier'],'a' if os.path.exists(resp['carrier']) else 'w')
Also, your code never calls close() on the file after it has finished writing to it. It should; forgetting it can result in missing data or other baffling behaviour.
The best way not to forget it is to use a context manager:
with open(resp['carrier'], 'a' if os.path.exists(resp['carrier']) else 'w') as mok:
    # writes within the with-block here

# rest of program here
# after the with-block ends, the context manager closes the file for you
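Putting those pieces together, the per-number handling inside the try block could look roughly like this (a sketch only, reusing resp and num from the question; since "a" creates missing files, the existence check can even be dropped):

carrier = resp['carrier']
if not carrier:
    print('skipping')                  # no carrier name to group under
else:
    with open(carrier, 'a') as mok:    # 'a' appends and creates the file if needed
        mok.write(num + '\n')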

Python write operation not writing quickly enough

I don't know why this started happening recently. I have a function that opens a new text file, writes a URL to it, then closes it, but the file is not created immediately after f.close() is executed. The problem is that a function called after it, open_url(), needs to read that URL from the text file, but since nothing is there yet, my program errors out.
Ironically, after my program errors out and I stop it, the url.txt file is created, haha. Does anyone know why this is happening with Python's write()? Is there another way to create a text file and write a line of text to it faster?
@staticmethod
def write_url():
    if not path.exists('url.txt'):
        url = UrlObj().url
        print(url)
        with open('url.txt', 'w') as f:
            f.write(url)
            f.close
    else:
        pass

@staticmethod
def open_url():
    x = open('url.txt', 'r')
    y = x.read()
    return y

def main():
    scraper = Job()
    scraper.write_url()
    url = scraper.open_url()
    results = scraper.load_craigslist_url(url)
    scraper.kill()
    dictionary_of_listings = scraper.organizeResults(results)
    scraper.to_csv(dictionary_of_listings)

if __name__ == '__main__':
    main()
    scheduler = BlockingScheduler()
    scheduler.add_job(main, 'interval', hours=1)
    scheduler.start()
There is another class, UrlObj, that prompts the user to add attributes to a bare URL for Selenium to use. UrlObj().url gives the URL that gets written to the new text file. If url.txt already exists, write_url() passes and open_url() reads the URL from url.txt into the url variable, which is used to start the scraping.
Just found a workaround: if the file does not exist, return the URL so it can be fed directly to load_craigslist_url; if the text file already exists, just read from it.
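A rough sketch of that workaround, assuming the same Job class and UrlObj helper from the question (only write_url() and main() shown):

@staticmethod
def write_url():
    if not path.exists('url.txt'):
        url = UrlObj().url
        with open('url.txt', 'w') as f:
            f.write(url)
        return url                     # hand the URL straight back to the caller
    with open('url.txt', 'r') as f:
        return f.read()

def main():
    scraper = Job()
    url = scraper.write_url()          # no separate open_url() step needed
    results = scraper.load_craigslist_url(url)
    ...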

python endless loop stops writing to file

I am downloading dynamic data from an API server and writing it to a file in an endless loop in Python. For whatever reason the program stops writing to the file after a couple of thousand lines, while the program still seems to be running. I am not sure where the problem is. The program does not raise an error, i.e. it is not that the API server is refusing to respond. When restarted, it continues as planned. I am using Python 3.6 and Windows 10.
Simplified version of the code looks something like:
import requests, json, time

while True:
    try:
        r = requests.get('https://someapiserver.com/data/')
        line = r.json()
        with open('file.txt', 'a') as f:
            f.write(line)
        time.sleep(5)
    except:
        print('error')
        time.sleep(10)
Try opening the file once up front and keeping it open, flushing after each write, like so:
import requests, json, time

f = open('file.txt', 'a')
while True:
    try:
        r = requests.get('https://someapiserver.com/data/')
        line = r.json()
        f.write(line)
        f.flush()
        time.sleep(5)
    except:
        print('error')
        time.sleep(10)
f.close()  # remember to close the file
The solution is ugly but it will do.
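One thing worth checking, since the bare except hides every failure: f.write() only accepts strings, so if the endpoint ever returns structured JSON, r.json() yields a dict and every write silently falls into the except branch. A variant that serializes each record and prints the real exception makes that kind of failure visible (same placeholder URL as above):

import json, time, requests

f = open('file.txt', 'a')
while True:
    try:
        r = requests.get('https://someapiserver.com/data/')
        record = r.json()                    # may be a dict or list, not a str
        f.write(json.dumps(record) + '\n')   # serialize before writing
        f.flush()
        time.sleep(5)
    except Exception as e:
        print('error:', e)                   # show what actually went wrong
        time.sleep(10)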

Python read from a file, and only do work if a string isn't found

So I'm trying to make a reddit bot that will exec code from a submission. I have my own sub for controlling these clients.
while __name__ == '__main__':
    string = open('config.txt').read()
    for submission in subreddit.get_new(limit = 1):
        if submission.url not in string:
            f.write(submission.url + "\n")
            f.close()
            f = open('config.txt', "a")
            string = open('config.txt').read()
What this is supposed to do is read from the config file, then only do work if the submission URL isn't already in config.txt. However, it always sees the most recent post and does its work anyway. This is how f is opened:
if not os.path.exists('file'):
    open('config.txt', 'w').close()
f = open('config.txt', "a")
First a critique of your existing code (in comments):
# the next two lines are not needed; open('config.txt', "a")
# will create the file if it doesn't exist.
if not os.path.exists('file'):
    open('config.txt', 'w').close()
f = open('config.txt', "a")

# this is an unusual condition which will confuse readers
while __name__ == '__main__':
    # the next line will open a file handle and never explicitly close it
    # (it will probably get closed automatically when it goes out of scope,
    # but it's not good form)
    string = open('config.txt').read()
    for submission in subreddit.get_new(limit = 1):
        # the next line should check for a full-line match; as written, it
        # will match "http://www.test.com" if "http://www.test.com/level2"
        # is in config.txt
        if submission.url not in string:
            f.write(submission.url + "\n")
            # the next two lines could be replaced with f.flush()
            f.close()
            f = open('config.txt', "a")
            # this is a cumbersome way to keep your string synced with the file,
            # and it never explicitly releases the new file handle
            string = open('config.txt').read()
    # If subreddit.get_new() doesn't return any results, this will act as
    # a busy loop, repeatedly requesting new results as fast as possible.
    # If that is undesirable, you might want to sleep here.

# file handle f should get closed after the loop
None of the problems pointed out above should keep your code from working (except maybe the imprecise matching). But simpler code may be easier to debug. Here's some code that does the same thing. Note: I assume there is no chance any other process is writing to config.txt at the same time. You could try this code (or your code) with pdb, line-by-line, to see whether it works as expected.
import time
import praw

r = praw.Reddit(...)
subreddit = r.get_subreddit(...)

if __name__ == '__main__':
    # open config.txt for reading and writing without truncating.
    # moves pointer to end of file; closes file at end of block
    with open('config.txt', "a+") as f:
        # move pointer to start of file
        f.seek(0)
        # make a set of existing lines; also moves pointer to end of file
        lines = set(f.read().splitlines())
        while True:
            got_one = False
            for submission in subreddit.get_new(limit=1):
                got_one = True
                if submission.url not in lines:
                    lines.add(submission.url)
                    f.write(submission.url + "\n")
                    # write data to disk immediately
                    f.flush()
                    ...
            if not got_one:
                # wait a little while before trying again
                time.sleep(10)

How to structure Python function so that it continues after error?

I am new to Python, and with some really great assistance from StackOverflow, I've written a program that:
1) Looks in a given directory, and for each file in that directory:
2) Runs an HTML-cleaning program, which:
- Opens each file with BeautifulSoup
- Removes blacklisted tags & content
- Prettifies the remaining content
- Runs Bleach to remove all non-whitelisted tags & attributes
- Saves the result as a new file
It works very well, except when it hits a certain kind of file content that throws up a bunch of BeautifulSoup errors and aborts the whole thing. I want it to be robust against that, as I won't have control over what sort of content winds up in this directory.
So, my question is: How can I re-structure the program so that when it errors on one file within the directory, it reports that it was unable to process that file, and then continues to run through the remaining files?
Here is my code so far (with extraneous detail removed):
def clean_dir(directory):
    os.chdir(directory)
    for filename in os.listdir(directory):
        clean_file(filename)

def clean_file(filename):
    tag_black_list = ['iframe', 'script']
    tag_white_list = ['p', 'div']
    attr_white_list = {'*': ['title']}
    with open(filename, 'r') as fhandle:
        text = BeautifulSoup(fhandle)
        text.encode("utf-8")
        print "Opened " + filename
        # Step one, with BeautifulSoup: Remove tags in tag_black_list, destroy contents.
        [s.decompose() for s in text(tag_black_list)]
        pretty = (text.prettify())
        print "Prettified"
        # Step two, with Bleach: Remove tags and attributes not in whitelists, leave tag contents.
        cleaned = bleach.clean(pretty, strip="TRUE", attributes=attr_white_list, tags=tag_white_list)
        fout = open("../posts-cleaned/" + filename, "w")
        fout.write(cleaned.encode("utf-8"))
        fout.close()
        print "Saved " + filename + " in /posts-cleaned"
    print "Done"

clean_dir("../posts/")
I'm looking for any guidance on how to write this so that it will keep running after hitting a parsing/encoding/content/attribute/etc. error within the clean_file function.
You can handle the errors using try/except/finally.
You can do the error handling inside clean_file or in the for loop:
for filename in os.listdir(directory):
    try:
        clean_file(filename)
    except:
        print "Error processing file %s" % filename
If you know what exception gets raised you can use a more specific catch.
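For example, if the failures in this script come from encoding or parsing problems, something along these lines would skip only those files while still surfacing unrelated bugs (the exception types listed are an assumption; check what BeautifulSoup actually raises for your inputs):

for filename in os.listdir(directory):
    try:
        clean_file(filename)
    except (UnicodeDecodeError, UnicodeEncodeError, ValueError) as e:
        # report the failing file and keep going with the rest of the directory
        print "Skipping %s: %s" % (filename, e)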
