Python Script Batch File

Struggling with what I am sure is a very straightforward problem. I have a scheduled task set up which launches a batch file, which in turn runs a Python script. All is well, except that I cannot seem to close the Python shell once the script is finished. The result is lots of open windows.
If this is a Python issue, I have read that the best way to close it is to do the following:
import json
import time
import datetime
import sys
from datetime import timedelta
from selenium import webdriver

# yesterday's date, formatted as dd.mm.yyyy
today = datetime.datetime.today()
yesterday = today - timedelta(days=1)
yesterday = yesterday.strftime("%d.%m.%Y")

# log in to the portal
browser = webdriver.Chrome(executable_path='c:/xampp/htdocs/portal/functions/timon/chromedriver.exe')
browser.get('http://adventures.timon.is')
time.sleep(2)
browser.find_element_by_id('tbxNumerstarfsmanns').clear()
browser.find_element_by_id('tbxNumerstarfsmanns').send_keys('user')
browser.find_element_by_id('tbxUserLykilord').clear()
browser.find_element_by_id('tbxUserLykilord').send_keys('pass')
time.sleep(2)
browser.find_element_by_css_selector('input[type="submit"]').click()

# navigate to yesterday's punch-in report
browser.find_element_by_css_selector("a[href*=reports]").click()
browser.find_element_by_link_text("Salary administrators").click()
browser.find_element_by_link_text("Punch-in report").click()
time.sleep(2)
browser.find_element_by_id('id_fromdate').clear()
browser.find_element_by_id('id_fromdate').send_keys(yesterday)
browser.find_element_by_id('id_todate').clear()
browser.find_element_by_id('id_todate').send_keys(yesterday)
time.sleep(2)
browser.find_element_by_css_selector("input[type=submit]").click()
time.sleep(2)

# dump the results table to a JSON text file
results = browser.find_elements_by_css_selector("table#resultstable td")
columns = [val.text for val in results]
data = json.dumps(columns)
with open("c:/xampp/htdocs/portal/functions/timon/info.txt", "w") as text_file:
    text_file.write(data)

browser.close()
sys.exit()
However, this does not work. The batch file looks like this:
start "extractTimon" "C:\xampp\Python36-32\python.exe" C:\xampp\htdocs\portal\functions\timon\extractTimon.py
If anyone could point me in the right direction, I'd really appreciate it.
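One thing worth checking: browser.close() only closes the current browser window, whereas browser.quit() shuts down the chromedriver session and all of its windows. If chromedriver is left running, the Python process (and the console window that start opened) can hang around. A minimal sketch of the script's ending, assuming everything else stays the same:

# quit() ends the chromedriver process and every window it owns;
# close() only closes the current window and leaves the driver running
browser.quit()
sys.exit()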

Related

How to exit and then automatically restart code

I have some Python code which scrapes a website and reports the live price of a specific crypto. When I use a while loop to keep printing the live price, it keeps printing the same price over and over, even when the live price on the website has changed. I thought maybe my code was hitting the website too fast, so I added a delay using the time module, but even after a 1-minute delay it will not display the correct price and instead prints the same price over and over. Manually ending and restarting the code seemed to make this bug go away, but I want this program to run 24/7 and email me when a price reaches a certain point. This is my code so far (BTW I am a beginner):
import requests
import bs4
import time

run = True
while run == True:
    # time.sleep(60)
    res = requests.get("https://coinmarketcap.com/currencies/gitcoin/")
    soup_obj = bs4.BeautifulSoup(res.text, "lxml")
    item = soup_obj.select(".priceValue___11gHJ")[0]
    item = item.text
    print(item)
    exit()
This has a loop, but I have added an exit() call so that it ends and I can manually restart it. I just need a way for this code to automatically end itself and then restart, repeatedly. I am also using the Community Edition of PyCharm (latest version).
You can write your program to call a subprocess instead of doing the web call itself. That subprocess can call requests, return whatever you want via stdout, and exit. There are multiple ways to do this: you could write separate scripts or use multiprocessing.Process, but in this example I've written a script that calls itself and uses command-line parameters to know which role it is playing.
import sys

if len(sys.argv) == 1:
    # run poller as subprocess so it exits
    import time
    import subprocess as subp
    while True:
        result = subp.run([sys.executable, __file__, "called"], capture_output=True)
        # assuming program returns ascii float in single line
        item = result.stdout.decode("ascii").strip()
        print(item)
        time.sleep(60)
else:
    import requests
    import bs4
    res = requests.get("https://coinmarketcap.com/currencies/gitcoin/")
    soup_obj = bs4.BeautifulSoup(res.text, "lxml")
    item = soup_obj.select(".priceValue___11gHJ")[0]
    item = item.text
    sys.stdout.write(item)
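Alternatively, a minimal sketch of the multiprocessing.Process route mentioned above; the Queue-based return channel is an assumption, not part of the original answer:

import multiprocessing as mp
import time

def fetch_price(q):
    # imports live in the child so each poll starts from a clean slate
    import requests
    import bs4
    res = requests.get("https://coinmarketcap.com/currencies/gitcoin/")
    soup_obj = bs4.BeautifulSoup(res.text, "lxml")
    q.put(soup_obj.select(".priceValue___11gHJ")[0].text)

if __name__ == "__main__":
    while True:
        q = mp.Queue()
        p = mp.Process(target=fetch_price, args=(q,))
        p.start()
        print(q.get())  # blocks until the child puts the price
        p.join()        # child has exited; all its state is gone
        time.sleep(60)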

Regular file download with Python scheduler and wget

I wrote a simple script which schedules the download of a file from a web page once every week, using the schedule module. Before downloading, it checks whether the file was updated, using BeautifulSoup. If yes, it downloads the file using wget. Another script then uses the file to perform calculations.
The problem is that the file won't appear in the directory until I manually interrupt the script. So each time I must interrupt the script and rerun it, so it'll be scheduled for the next week.
Is there any way to download and save the file "on the fly", without interrupting the script?
The code is:
import wget
import ssl
import schedule
import time
from bs4 import BeautifulSoup
from urllib.request import urlopen  # needed by check_for_updates()
import datefinder
from datetime import datetime

# disable certificate checks
ssl._create_default_https_context = ssl._create_unverified_context

# checking if file was updated; if yes, download file, if not, wait until updated
def download_file():
    if check_for_updates():
        print("downloading")
        url = 'https://fgisonline.ams.usda.gov/ExportGrainReport/CY2020.csv'
        wget.download(url)
        print("downloading complete")
    else:
        print("sleeping")
        time.sleep(60)
        download_file()

# checking if website was updated
def check_for_updates():
    url2 = 'https://fgisonline.ams.usda.gov/ExportGrainReport/default.aspx'
    html = urlopen(url2).read()
    soup = BeautifulSoup(html, "lxml")
    text_to_search = soup.body.ul.li.string
    matches = list(datefinder.find_dates(text_to_search[30:]))
    found_date = matches[0].date()
    today = datetime.today().date()
    return found_date == today

schedule.every().tuesday.at('09:44').do(download_file)

while True:
    schedule.run_pending()
    time.sleep(1)
You need to specify the output directory. I think that unless you do this, PyCharm saves the file in a temp directory somewhere, and only copies it over when you stop the script.
Change to:
wget.download(url, out=output_directory)
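For example, with an explicit path (the path here is illustrative):

# write straight into the directory the downstream script reads from
wget.download(url, out="/home/user/data/CY2020.csv")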
Based on the following clue you should be able to solve your issue:
import requests
import urllib3

urllib3.disable_warnings()

def main(url):
    # HEAD request: fetch only the headers, not the whole CSV
    r = requests.head(url, verify=False)
    print(r.headers['Last-Modified'])

main("https://fgisonline.ams.usda.gov/ExportGrainReport/CY2020.csv")
Output:
Mon, 28 Sep 2020 15:02:22 GMT
Now you can run your script via a cron job daily at the time you prefer, loop over the file's Last-Modified header until it becomes equal to today's date, and then download the file.
Note that I used a HEAD request, which is far faster since it fetches only the headers; once the date matches, you can switch to requests.get to download the file.
I'd also prefer to work within the same session.
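Putting that together, a rough sketch of the polling loop (the local filename and the 60-second interval are illustrative assumptions):

import time
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime
import requests

URL = "https://fgisonline.ams.usda.gov/ExportGrainReport/CY2020.csv"

with requests.Session() as session:
    while True:
        r = session.head(URL, verify=False)
        # 'Mon, 28 Sep 2020 15:02:22 GMT' -> date object
        modified = parsedate_to_datetime(r.headers["Last-Modified"]).date()
        if modified == datetime.now(timezone.utc).date():
            # file was updated today: fetch it within the same session
            data = session.get(URL, verify=False).content
            with open("CY2020.csv", "wb") as f:
                f.write(data)
            break
        time.sleep(60)  # poll again in a minute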

Save live data from data logger to csv file by python

I have a data logger to record the temperature. I want to save these data and the epoch time in a csv file. I tried the following code; there is no error reported, but the csv file is empty. Can anyone help me figure out the problem?
import board
import busio
import adafruit_mcp9600
import time

i2c = busio.I2C(board.SCL, board.SDA, frequency=100000)
mcp = adafruit_mcp9600.MCP9600(i2c, 0x60, tctype="J")

with open("/home/pi/Documents/test.csv", "a") as log:
    while True:
        temp = mcp.temperature
        temptime = time.time()
        log.write("{0},{1}\n".format(str(temptime), str(temp)))
        time.sleep(1)
Assuming those libraries you're importing are working correctly, I think this is because the writer is not flushing the buffer, so it appears like nothing is being written.
The solution would be to flush with log.flush() after each time you write a log.
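Applied to the loop in the question, that would be:

temp = mcp.temperature
temptime = time.time()
log.write("{0},{1}\n".format(temptime, temp))
log.flush()  # push the buffered row out to disk immediately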
Try a simpler example:
A)
import time

def go():
    i = 0
    with open("/home/some/dir/test.csv", "a") as nice:
        while True:
            nice.write(f"hello,{i},{time.time()}\n")
            i += 1
            time.sleep(5)

if __name__ == "__main__":
    go()
versus
B)
import time

def go():
    i = 0
    while True:
        with open("/home/some/dir/test.csv", "a") as nice:
            nice.write(f"hello,{i},{time.time()}\n")
            i += 1
            time.sleep(5)

if __name__ == "__main__":
    go()
When I refresh the file in case A, new rows do not appear to be written. They are in case B, though.
If I modify case A) and add nice.flush() after each write, it fixes the issue.
The above two blocks are just to demonstrate what you're seeing; I'm not suggesting you do one or the other. Ultimately, I would not suggest doing anything like this at all: if you're really trying to create log files, use the logging package and configure a proper logger instead.
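A minimal sketch of that, assuming the epoch-time-plus-reading format from the question (the file path is illustrative):

import logging
import time

# one CSV-style row per record: epoch time, then the message
logging.basicConfig(
    filename="/home/pi/Documents/test.csv",
    level=logging.INFO,
    format="%(created)f,%(message)s",
)

while True:
    logging.info("23.5")  # stand-in for mcp.temperature
    time.sleep(1)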

Reload a python script after an auto selenium error

Hi guys, I'm having a hard time with a script.
I have a Python script using Selenium. I am doing automation on a site, and the script needs to keep running on that site for a long time.
The problem is that the site times out after 30 minutes; the robot then returns an error and stops executing.
When this happens, I need it to close all windows and reconnect to the site again.
If anyone can help me, it will help a lot!
from selenium import webdriver
import pyautogui
import csv

URL = 'https://XXXXXXX'
URL2 = 'https://XXXXXX'
user = 'user12345'
password = 'password12345'

class Web:
    browser = webdriver.Ie(URL)
    browser.find_element_by_name('login').send_keys(user)
    browser.find_element_by_name('password').send_keys(password)
    # here I open a login window so I can use another link that I need to use
    pyautogui.moveTo(121, 134)
    pyautogui.click(121, 134)
    browser.execute_script("window.open()")
    browser.switch_to.window(browser.window_handles[1])
    browser.get(URL2)
    # tabela (the CSV path) is defined in the original script
    with open(tabela, "r") as leitor:
        reader = csv.DictReader(leitor, delimiter=';')
        for linha in reader:
            folder = linha['folder']
            try:
                browser.find_element_by_id('field').send_keys(folder)
                browser.find_element_by_id('save').click()
            except:
                with open('falied.txt', 'a') as writer:
                    writer.write(folder)
    browser.quit()

if __name__ == '__main__':
    Web()
From now on, it needs to keep running the code inside the page.
This code is an example similar to my original code.
Replace your part of the code with the code below:
if __name__ == '__main__':
    while True:
        try:
            Web()
        except:
            browser.quit()
As you can see, we're calling it inside while True, which means it will run indefinitely; browser.quit() closes Selenium whenever an error occurs, after which the loop reconnects.
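One caveat with that snippet: browser is created inside Web, so it isn't in scope at module level. A sketch that handles this, catching Selenium's own exception type (the 5-second pause is an assumption):

from selenium.common.exceptions import WebDriverException
import time

if __name__ == '__main__':
    while True:
        try:
            Web()
        except WebDriverException:
            # timeout or dead driver: close everything, then reconnect
            try:
                Web.browser.quit()  # browser is a class attribute of Web
            except Exception:
                pass
            time.sleep(5)  # brief pause before the next attempt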

Python - Put a list in a threading module

I want to put a list into my threading script, but I am facing a problem.
Contents of list file (example):
http://google.com
http://yahoo.com
http://bing.com
http://python.org
My script:
import codecs
import threading
import sys
import requests
from time import time as timer
from timeout import timeout
import time

try:
    with codecs.open(sys.argv[1], mode='r', encoding='ascii', errors='ignore') as iiz:
        iiz = iiz.read().splitlines()
except IOError:
    pass

oz = list(iiz)

def nnn(url):
    hzz = {'param1': sys.argv[2], 'param2': sys.argv[3]}
    po = requests.post(url, data=hzz)
    if po:
        print("ok \n")

if __name__ == '__main__':
    threads = []
    for i in range(1):
        t = threading.Thread(target=nnn, args=(oz,))
        threads.append(t)
        t.start()
Can you please elaborate on exactly what you're trying to achieve?
I'm guessing that you're trying to request URLs to load into a web browser or the terminal.
Also, you shouldn't need to put the URLs into a list: splitlines() already returns one when you read the file, so the contents of iiz are already in list format.
Personally, I haven't worked much with the modules you're using (apart from time), but I'll try my best to help you, and hopefully other users will too.
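For what it's worth, a sketch of what the script above probably intends: one worker thread per URL, passing each list element to nnn rather than the whole list (this replaces the for i in range(1) block):

if __name__ == '__main__':
    threads = []
    for url in oz:
        # each thread posts to a single url
        t = threading.Thread(target=nnn, args=(url,))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()  # wait for every request to finish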
