String Formatting in Python/Thunderbird

Noob, trying to use Thunderbird (rather than SMTP) to send personalized emails to a few dozen people. I am basically looking to have the message display in Thunderbird as follows:
Dear Bob,
It was nice to meet you the other day.
However, I instead end up with:
Dear Bob (comma missing, and rest of body missing)
I have tried the following:
import subprocess
import os

def send_email(name, email_address):
    #print(name, email_address)
    os.system("thunderbird -compose to= 'to',subject='subject',body='body'")
    tbirdPath = r'c:\Program Files (x86)\Mozilla Thunderbird\thunderbird.exe'
    to = email_address
    subject = 'Test Subject LIne'
    #body = "Dear %s, \n\n This is the body." %(name)
    body = 'html><body>Dear %s, This is the body <br></body></html>'%(name)
    composeCommand = 'format=html,to={},subject={},body={}'.format(to, subject, body)
    subprocess.Popen([tbirdPath, '-compose', composeCommand])
As always, simple answers I can implement are preferred to complex ones I cannot. I suspect I'm missing something stupid about string formatting, but am unsure as to exactly what. Thanks in advance for your help.

From this example, it looks like you need to surround the arguments with both single and double quotes.
Like this:
composeCommand = '"format=html,to=\'{}\',subject=\'{}\',body=\'{}\'"'.format(to, subject, body)
By the way, if you are using Python 3.6+, f-strings make the strings more readable:
body = f'<html><body>Dear {name}, This is the body <br></body></html>'
composeCommand = f'"format=html,to=\'{to}\',subject=\'{subject}\',body=\'{body}\'"'
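Putting the quoting advice together, here is a minimal sketch of building the -compose argument. The recipient address and body are made-up examples, and the Popen call is left commented out because it needs a local Thunderbird install:

```python
def build_compose_arg(to, subject, body):
    # Single-quote each field so commas inside the body do not split
    # the -compose argument into separate fields.
    return "format=html,to='{}',subject='{}',body='{}'".format(to, subject, body)

to = 'bob@example.com'          # hypothetical recipient
subject = 'Test Subject Line'
body = '<html><body>Dear Bob,<br>It was nice to meet you the other day.</body></html>'
compose = build_compose_arg(to, subject, body)
# import subprocess
# subprocess.Popen([r'c:\Program Files (x86)\Mozilla Thunderbird\thunderbird.exe',
#                   '-compose', compose])
print(compose)
```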

So here is a simple program to read in names and email addresses from a CSV file, and to automate drafting emails from your Thunderbird client (you will still need to hit send on each), using Python on a Windows machine.
import csv
import subprocess
import os

# a list that will hold one OrderedDict per person,
# e.g. OrderedDict([('Name', 'Bob'), ('email', 'bob@yahoo.com')])
each_persons_info = []

def load_email_info(data_file):
    """
    Load data from a CSV file into memory.
    """
    # Load people
    with open(data_file, encoding="utf-8") as f:
        reader = csv.DictReader(f)
        # DictReader starts reading at row 2, with row 1 supplying the field
        # names (this is where it differs from csv.reader); append each row
        # to the each_persons_info list
        for row in reader:
            each_persons_info.append(row)
def send_email(name, email_address):
    """
    Launches Thunderbird and drafts a personalized email to each person on
    your list, using the content you supply in the subject and body fields below.
    """
    subject = 'Test Subject Line'
    body = "Dear {}".format(name) + '\n\n' + "This is the body." + '\n\n' + "The End." + '\n'
    to = email_address
    tbirdPath = r'c:\Program Files (x86)\Mozilla Thunderbird\thunderbird.exe'
    composeCommand = "format=html,to={},subject={},body='{}'".format(to, subject, body)
    subprocess.Popen([tbirdPath, '-compose', composeCommand])
def main():
    load_email_info("email_list.csv")
    # walk each person through the send_email function
    for item in each_persons_info:
        send_email(item['Name'], item['email'])

if __name__ == "__main__":
    main()
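The script above expects email_list.csv to have a header row; the column names 'Name' and 'email' come from the comment near the top. A minimal sketch of what DictReader does with such a file, using an in-memory stand-in with made-up rows:

```python
import csv
import io

# A stand-in for email_list.csv: a header row naming the columns the
# script expects ('Name' and 'email'), followed by one row per person.
sample = "Name,email\nBob,bob@example.com\nSue,sue@example.com\n"

reader = csv.DictReader(io.StringIO(sample))
rows = list(reader)
for row in rows:
    print(row['Name'], row['email'])
```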


How do I create new JSON data after every script run

I have JSON data stored in the variable data.
I want it written to a text file every time the script runs, so I can tell which JSON data is new instead of re-writing the same JSON.
Currently, I am trying this:
Saving = firstname + ' ' + lastname + ' - ' + email
with open('data.json', 'a') as f:
    json.dump(Saving, f)
    f.write("\n")
which just keeps appending to the JSON file. At the beginning of the script, where the first code starts, I clear it with
Infotext = "First name : Last name : Email"
with open('data.json', 'w') as f:
    json.dump(Infotext, f)
    f.write("\n")
How can I, instead of re-writing the same JSON file, create a new file with the Infotext information and then append the Saving entries to it?
Desired output in JSON:
"First name : Last name : Email"
Hello World - helloworld@test.com
Hello2 World - helloworld2@test.com
Hello3 World - helloworld3@test.com
Hello4 World - helloworld4@test.com
That's the output I am after. It needs to start with
"First name : Last name : Email"
and then the name, last name and email entries accumulate below that until there are no names left.
To put it simply: instead of clearing and appending to the same JSON file (data.json), I want the script to create a new file called data1.json; if I rerun the program tomorrow, it should create data2.json, and so on.
Just use a datetime in the file name to create a unique file each time the code is run. Granularity here goes down to the second, so if the code runs more than once per second you will overwrite a file's existing contents; in that case, include microseconds in the file name instead.
import datetime as dt
import json

time_script_run = dt.datetime.now().strftime('%Y_%m_%d_%H_%M_%S')
with open('{}_data.json'.format(time_script_run), 'w') as outfile:
    json.dump(Infotext, outfile)
This has multiple drawbacks:
You'll have an ever-growing number of files
Even if you load the file with the latest datetime in its name (and finding that file takes longer with every run), you can only see the data as it was just before the last run; the full history is very difficult to look up.
I think you're better off using a light-weight database such as sqlite3:
import sqlite3
import random
import time
import datetime as dt

# Create DB
with sqlite3.connect('some_database.db') as conn:
    c = conn.cursor()
    # Just for this example, we'll clear the whole table to make it repeatable
    try:
        c.execute("DROP TABLE user_emails")
    except sqlite3.OperationalError:  # First time you run this code
        pass
    c.execute("""CREATE TABLE IF NOT EXISTS user_emails(
                     datetime TEXT,
                     first_name TEXT,
                     last_name TEXT,
                     email TEXT)
              """)
    # Now let's create some fake user behaviour
    for x in range(5):
        now = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
        c.execute("INSERT INTO user_emails VALUES (?, ?, ?, ?)",
                  (now, 'John', 'Smith', random.randint(0, 1000)))
        time.sleep(1)  # so we get new timestamps

# Later on, doing some work
with sqlite3.connect('some_database.db') as conn:
    c = conn.cursor()
    # Get the whole user history
    c.execute("""SELECT * FROM user_emails
                 WHERE first_name = ? AND last_name = ?
              """, ('John', 'Smith'))
    print("All data")
    for row in c.fetchall():
        print(row)
    print('...............................................................')
    # Or, let's get the last email address
    print("Latest data")
    c.execute("""
              SELECT * FROM user_emails
              WHERE first_name = ? AND last_name = ?
              ORDER BY datetime DESC
              LIMIT 1;
              """, ('John', 'Smith'))
    print(c.fetchall())
Note: the data retrieval itself is very fast; the code only takes ~5 seconds to run because of the time.sleep(1) calls used when generating the fake user data.
The JSON file should contain a list of strings. You should read the current contents of the file into a variable, append to the variable, then rewrite the file.
with open("data.json", "r") as f:
    data = json.load(f)
data.append(firstname + ' ' + lastname + ' - ' + email)
with open("data.json", "w") as f:
    json.dump(data, f)
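If the file may not exist yet (for example on the very first run), a variant of the same read-append-rewrite pattern can fall back to a fresh list. This is a sketch: the header string and the sample entry are taken from the question, and firstname/lastname/email are replaced with a literal for illustration:

```python
import json

path = 'data.json'

# Load the existing list, or start a fresh one (with the header entry)
# the first time the script runs or if the file is empty/invalid.
try:
    with open(path, 'r') as f:
        data = json.load(f)
except (FileNotFoundError, json.JSONDecodeError):
    data = ["First name : Last name : Email"]

data.append('Hello World - helloworld@test.com')

with open(path, 'w') as f:
    json.dump(data, f)
```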
I think what you could do is use seek() on the file and write at the relevant position in the JSON file. For example, if you need to update firstname, seek to the : after firstname and update the text there.
There are examples here :
https://www.tutorialspoint.com/python/file_seek.htm
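One caveat worth adding to the seek() suggestion: write() at a seek()-ed position overwrites bytes in place, it does not insert, so this only works cleanly when the replacement has exactly the same length as the text it replaces. A small self-contained illustration (demo.txt and the names are made up):

```python
# write() at a seek()-ed position overwrites bytes, it does not insert,
# so the replacement must be exactly the same length as the original.
with open('demo.txt', 'w') as f:
    f.write('name = "Alice"\n')

# Binary mode keeps byte offsets well-defined for seek().
with open('demo.txt', 'rb+') as f:
    data = f.read()
    f.seek(data.index(b'Alice'))  # byte offset of the value to replace
    f.write(b'Carol')             # same length as b'Alice'

with open('demo.txt') as f:
    print(f.read())               # name = "Carol"
```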

write list of paragraph tuples to a csv file

The following code is designed to write tuples, each containing a large paragraph of text followed by two identifiers, one tuple per line.
import urllib2
import json
import csv

base_url = "https://www.eventbriteapi.com/v3/events/search/?page={}"

writer = csv.writer(open("./data/events.csv", "a"))
writer.writerow(["description", "category_id", "subcategory_id"])

def format_event(event):
    return event["description"]["text"].encode("utf-8").rstrip("\n\r"), event["category_id"], event["subcategory_id"]

for x in range(1, 2):
    print "fetching page - {}".format(x)
    formatted_url = base_url.format(str(x))
    resp = urllib2.urlopen(formatted_url)
    data = resp.read()
    j_data = json.loads(data)
    events = map(format_event, j_data["events"])
    for event in events:
        #print event
        writer.writerow(event)
    print "wrote out events for page - {}".format(x)
The ideal format would be to have each line contain a single paragraph, followed by the other fields listed above, yet here is a screenshot of how the data comes out.
If instead I change this line to the following:
writer.writerow([event])
Here is how the file now looks:
It certainly looks much closer to what I want, but it's got parentheses around each entry, which are undesirable.
EDIT
here is a snippet that contains a sample of the data I'm working with.
Can you try writing to the CSV file directly, without using the csv module? You can write/append comma-delimited strings to the CSV file just like writing to a typical text file. Also, the way you remove the \r and \n characters might not be working; you can use a regex to find those characters and replace them with an empty string "":
import urllib2
import json
import re

base_url = "https://www.eventbriteapi.com/v3/events/search/?page={}"

def format_event(event):
    ws_to_strip = re.compile(r"(\r|\n)")
    description = re.sub(ws_to_strip, "", event["description"]["text"].encode("utf-8"))
    return [description, event["category_id"], event["subcategory_id"]]

with open("./data/events.csv", "a") as events_file:
    events_file.write(",".join(["description", "category_id", "subcategory_id"]) + "\n")
    for x in range(1, 2):
        print "fetching page - {}".format(x)
        formatted_url = base_url.format(str(x))
        resp = urllib2.urlopen(formatted_url)
        data = resp.read()
        j_data = json.loads(data)
        events = map(format_event, j_data["events"])
        for event in events:
            events_file.write(",".join(event) + "\n")
        print "wrote out events for page - {}".format(x)
Change your csv writer to a DictWriter.
Make a few tweaks:
def format_event(event):
    return {"description": event["description"]["text"].encode("utf-8").rstrip("\n\r"),
            "category_id": event["category_id"],
            "subcategory_id": event["subcategory_id"]}
There may be a few other small things you need to do, but using DictWriter and formatting your data appropriately has been the easiest way I've found to work with CSV files.
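For illustration, here is a small Python 3 sketch of the DictWriter approach. It writes to an in-memory buffer rather than the real events.csv, and the row values are made up; note how DictWriter automatically quotes the description because it contains commas, which is exactly what keeps each paragraph on one line:

```python
import csv
import io

fieldnames = ["description", "category_id", "subcategory_id"]

# Written to an in-memory buffer here; in the real script you would pass
# open("./data/events.csv", "a", newline="") instead of the StringIO object.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fieldnames)
writer.writeheader()
writer.writerow({"description": "A long paragraph, full of commas, and text.",
                 "category_id": "103",
                 "subcategory_id": "3008"})
print(buf.getvalue())
```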

PYTHON/OUTLOOK Sending e-mails through PYTHON with DOCX

I have to send mails through Python. It works; it is almost done. The only problem is that I have to keep the formatting too. So either I have to send the email as HTML (and then rewrite the template in HTML instead of .docx), OR copy the .docx file over with its formatting.
Does anybody have any ideas on how to do this? Thanks, guys.
import win32com.client as win32
import fileinput as fi
from docx import Document

outlook = win32.Dispatch('outlook.application')
path_in = 'maillist.csv'
input_file = open(path_in, 'r')
document = Document('template.docx')
document_html = open('template.html', 'r')
print(temp)

def filecount(fname):
    for line in fi.input(fname):
        pass
    return fi.lineno()

print("Total mails %s" % (filecount(path_in)))

count = 0
for line in input_file:
    if (count > 16):
        name = line.split(";")[0]
        mail_adress = line.split(";")[1]
        subject = line.split(";")[2]
        print("%s:%s:%s:" % (name, mail_adress, subject))
        mail = outlook.CreateItem(0)
        mail.To = mail_adress
        mail.Subject = subject
        mail.body = temp.replace("XXXNAMEXXX", name)
        mail.send
    else:
        count += 1
Try adding the .RTFBody and/or .HTMLBody methods to the document objects :
document = Document('template.docx').RTFBody
document_html = open('template.html', 'r').HTMLBody
Also, I'm not sure if it makes much of a difference but, for convention's sake, I like to capitalize the first letter of the method for the mailItem object.
Let me know if that works.

Matching keyword from text file containing gmail imap dump Python 3, windows 8

I have two scripts.
The first checks my email folder and writes what it gets to a text file:
def email_checker():
    import imaplib
    import textwrap
    mail = imaplib.IMAP4_SSL('imap.gmail.com')
    mail.login('myemail@gmail.com', 'mypassword')
    mail.list()
    mail.select('inbox')
    typ, data = mail.search(None, 'ALL')
    my_file = open("email.txt", "w")
    for num in data[0].split():
        typ, newb = mail.fetch(num, '(RFC822)')
        print('Message %s\n%s\n' % (num, newb[0][1]))
        my_file.write('Message %s\n%s\n' % (num, newb[0][1]))
        #my_out = textwrap.wrap(str(('Message %s\n %s\n' % (num, newb[0][1]))), 70)
        #my_file.write(my_out)
    mail.close()
    my_file.close()
The second function is then supposed to check this file for a keyword.
def keyword_match(keyword):
    with open("email.txt", "r") as openfile:
        for line in openfile:
            for part in line.split():
                if part == keyword:
                    print("working")
                    return True
                else:
                    return False
The problem is that the keyword matcher does not return True even when using words I know exist in the email.
I have tried using textwrap (see the commented-out code) but it gives me an error: TypeError: must be str, not list.
I am using Python 3, on a windows 8, 64 bit machine.
Update:
Ok so I created a new text file with "This is my test sentence" and then ran the keyword match on it for "test" and that works.
The print in email_checker seems to work fine and prints out lots of detail about the email, and the contents do contain the keywords being searched for.
The strings in the text file do seem to contain the contents, so I don't think it's a case of missing content in the txt file as I originally thought.
Ok I think I have fixed it. My updated code looks like this and seems to work.
def keyword_match(keyword):
    with open("email.txt", "r") as openfile:
        for line in openfile:
            for part in line.split():
                if part == keyword:
                    print("working")
                    return True
                    break
            else:
                return False
Final Update: Sorry, I am trying to paste the code in, but the editor seems to have a mind of its own about what it accepts as a code snippet. Essentially, I put a break after the if and moved the else two steps out, so as to make it a for/else loop.
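For reference, here is a cleaner way to express the same for/else idea (this is a sketch, not the poster's exact code: the break after return is unreachable and is dropped, and the sample sentence comes from the update above):

```python
def keyword_match(keyword, path="email.txt"):
    # for/else: the else branch runs only when the loop finishes
    # without hitting break, i.e. when no word matched.
    with open(path) as f:
        words = f.read().split()
    for word in words:
        if word == keyword:
            break
    else:
        return False
    return True

with open("email.txt", "w") as f:
    f.write("This is my test sentence\n")

print(keyword_match("test"))     # True
print(keyword_match("missing"))  # False
```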

How can I parse a formatted file into variables using Python?

I have a pre-formatted text file with some variables in it, like this:
header one
name = "this is my name"
last_name = "this is my last name"
addr = "somewhere"
addr_no = 35
header
header two
first_var = 1.002E-3
second_var = -2.002E-8
header
As you can see, each scope starts with the string header followed by the name of the scope (one, two, etc.).
I can't figure out how to programmatically parse those options using Python so that they would be accessible to my script in this manner:
one.name = "this is my name"
one.last_name = "this is my last name"
two.first_var = 1.002E-3
Can anyone point me to a tutorial or a library or to a specific part of the docs that would help me achieve my goal?
I'd parse that with a generator, yielding sections as you parse the file. ast.literal_eval() takes care of interpreting the value as a Python literal:
import ast

def load_sections(filename):
    with open(filename, 'r') as infile:
        for line in infile:
            if not line.startswith('header'):
                continue  # skip to the next line until we find a header
            sectionname = line.split(None, 1)[-1].strip()
            section = {}
            for line in infile:
                if line.startswith('header'):
                    break  # end of section
                line = line.strip()
                key, value = line.split(' = ', 1)
                section[key] = ast.literal_eval(value)
            yield sectionname, section
Loop over the above function to receive (name, section_dict) tuples:
for name, section in load_sections(somefilename):
print name, section
For your sample input data, that results in:
>>> for name, section in load_sections('/tmp/example'):
... print name, section
...
one {'last_name': 'this is my last name', 'name': 'this is my name', 'addr_no': 35, 'addr': 'somewhere'}
two {'first_var': 0.001002, 'second_var': -2.002e-08}
Martijn Pieters's answer is correct given your preformatted file, but if you can format the file differently in the first place, you will avoid a lot of potential bugs. If I were you, I would look into formatting the file as JSON (or XML), because then Python's json (or XML) libraries do the work for you: http://docs.python.org/2/library/json.html. Unless you're working with really bad legacy code or a system you don't have access to, you should be able to go into the code that produces the file in the first place and make it give you a better format.
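To illustrate that suggestion, here is roughly what the sample file could look like re-encoded as JSON (a hypothetical re-encoding for illustration, not a format the file's producer actually emits):

```python
import json

# The same scoped data expressed as JSON; json.loads parses it directly
# into nested dicts, so no hand-written parser is needed.
raw = '''
{
    "one": {"name": "this is my name", "last_name": "this is my last name",
            "addr": "somewhere", "addr_no": 35},
    "two": {"first_var": 1.002e-3, "second_var": -2.002e-8}
}
'''
sections = json.loads(raw)
print(sections["one"]["name"])        # this is my name
print(sections["two"]["first_var"])   # 0.001002
```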
def get_section(f):
    section = []
    for line in f:
        section += [line.strip("\n ")]
        if section[-1] == 'header':
            break
    return section

sections = dict()
with open('input') as f:
    while True:
        section = get_section(f)
        if not section:
            break
        section_dict = dict()
        section_dict['sname'] = section[0].split()[1]
        for param in section[1:-2]:
            k, v = [x.strip() for x in param.split('=')]
            section_dict[k] = v
        sections[section_dict['sname']] = section_dict

print sections['one']['name']
You can also access these sections as attributes:
class Section:
    def __init__(self, d):
        self.__dict__ = d

one = Section(sections['one'])
print one.name
