I need to download only those emails from the IMAP server that are older than a certain Unix timestamp. The timestamp has to be accurate to the second, and it is updated according to the server time.
Example:
datetime.now().strftime('%d-%b-%Y %H:%M:%s')
'21-Dec-2012 16:50:1356088844'
Problem code:
result, data = imap_server.search(None, '(SINCE '+datetime.now().strftime('%d-%b-%Y')+')')
The search function only takes 'SINCE <date>'. It won't take a full date-time argument; it only accepts a date, for example '21-Dec-2012'.
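For clarity, here is a minimal sketch of what the server accepts versus what I actually want (the server address, credentials and mailbox are placeholders):
import imaplib
from datetime import datetime

imap_server = imaplib.IMAP4_SSL('imap.example.com')   # placeholder server
imap_server.login('user', 'password')                 # placeholder credentials
imap_server.select('INBOX')

# This works: SINCE only accepts a date, e.g. '21-Dec-2012'
result, data = imap_server.search(None, '(SINCE '+datetime.now().strftime('%d-%b-%Y')+')')

# This is what I want (to-the-second filtering), but a full date-time is rejected:
# result, data = imap_server.search(None, '(SINCE "21-Dec-2012 16:50:44")')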
How can I solve this?
I am doing research on AWS OpenSearch, and one of the things I'm trying to measure is the run time (execution time) of different queries and index commands. For example, how long does it take to perform an action such as a query search, create index, or delete index?
Right now I am using the awswrangler Python library for interacting with OpenSearch. The API docs for that library are here.
Read Index Code I currently have:
awswrangler.opensearch.search(client=self.client, index="index_name", search_body=any_dsl_query, size=100)
awswrangler.opensearch.search_by_sql(client=self.client, sql_query="SELECT * from index_name limit 100")
Delete Index Code:
awswrangler.opensearch.delete_index(client=self.client, index="index_name")
Create Index Code (this one actually returns Elapsed time as desired):
awswrangler.opensearch.index_csv(client=self.client, path=csv_file_path, index="index_name")
Unfortunately, none of these except Create Index returns the runtime out of the box.
I know that I could create my own timer script to get the runtime, but I don't want to do this client side, because that would include my network latency in the execution time. Is there any way to do this with OpenSearch?
I couldn't find a way in the awswrangler Python library I was using, or with any other method so far.
I was able to resolve this by using the Python requests library and looking at the "took" value in the response, which is the time it took to run the query on the server, in milliseconds. Here is the code I used to get this working:
import json, requests

sample_sql_query = "SELECT * FROM <index_name> LIMIT 5"
headers = {"Content-Type": "application/json"}  # the SQL endpoint expects a JSON body
sql_result = requests.post("<opensearch_domain>/_plugins/_sql?format=json", auth=(username, password), data=json.dumps({"query": sample_sql_query}), headers=headers).json()
print(sql_result)
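The same "took" field also comes back from a plain DSL _search request, so a similar sketch should work for the non-SQL case (the domain, index name and credentials below are again placeholders):
import json, requests

headers = {"Content-Type": "application/json"}
dsl_query = {"query": {"match_all": {}}, "size": 5}   # any DSL query body
search_result = requests.post("<opensearch_domain>/<index_name>/_search", auth=(username, password), data=json.dumps(dsl_query), headers=headers).json()
print(search_result["took"])   # server-side execution time in milliseconds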
I am doing a personal project to properly understand Python. I recently started coding.
The code below gets the last email from a particular folder, applies regular expressions to the body, and then fills some details into a web form.
The part I am stuck at is getting live emails, i.e. the emails arriving over the next few hours. Is there a way to keep the program running so that it fetches live emails one after the other for the next few hours, rather than just the last one?
import win32com.client
import re
from selenium import webdriver
from datetime import datetime,timedelta
import time
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
inbox = outlook.Folders['keelanur.nishanth#nxp.com'].Folders['Inbox'].Folders['NXDI Maintenace LSF']
messages = inbox.Items
message = messages.GetLast()
body_content = message.Body
#Regular expression code
...
#Selenium web driver code for form filling.
I would like to know if there is any way to keep the code running so that it fetches one email after another for the next few hours.
For the past few hours and days I have tried to adapt code I found on Stack Overflow, but I was unable to get it working. Could someone please help me?
Have a nice day :)
You need to use the Find/FindNext or Restrict methods of the Items class. Read more about that in the following articles:
How To: Use Find and FindNext methods to retrieve Outlook mail items from a folder (C#, VB.NET)
How To: Use Restrict method to retrieve Outlook mail items from a folder
Outlook evaluates date-time values according to the time format, short date format, and long date format settings in the Regional and Language Options applet in the Windows Control Panel. In particular, Outlook evaluates time according to that specified time format without seconds. If you specify seconds in the date-time comparison string, the filter will not operate as expected.
Although dates and times are typically stored with a date format, filters using the Jet and DAV Searching and Locating (DASL) syntax require that the date-time value be converted to a string representation. In Jet syntax, the date-time comparison string should be enclosed in either double quotes or single quotes. In DASL syntax, the date-time comparison string should be enclosed in single quotes.
To make sure that the date-time comparison string is formatted as Microsoft Outlook expects, use the Visual Basic for Applications Format function (or its equivalent in your programming language).
For example, most probably you will be interested in a filter like the following:
'This filter uses urn:schemas:httpmail namespace
strFilter = AddQuotes("urn:schemas:httpmail:datereceived") _
& " > '" & datStartUTC & "' AND " _
& AddQuotes("urn:schemas:httpmail:datereceived") _
& " < '" & datEndUTC & "'"
Most probably you will be interested in the MailItem.ReceivedTime property value to sort items (it returns a date indicating the date and time at which the item was received).
The Filtering Items Using a Date-time Comparison article shows how to construct a search criteria in depth.
I would like to know if there is any way to keep the code running so that it fetches one email after another for the next few hours.
You can extract emails for each hour in an outer loop; in an inner loop you can iterate over the emails found for that particular hour. Otherwise, you may consider running a timer: in the tick event you can process each hour, i.e. the emails received during that hour. A rough polling sketch is shown below.
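For example, a minimal Python sketch along those lines, reusing the folder setup from the question (the three-hour window, the one-minute poll interval and the exact Restrict filter format are assumptions; note that the filter string deliberately omits seconds, as explained above):
import time
from datetime import datetime, timedelta
import win32com.client

outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
inbox = outlook.Folders['keelanur.nishanth#nxp.com'].Folders['Inbox'].Folders['NXDI Maintenace LSF']

last_check = datetime.now()
end_time = last_check + timedelta(hours=3)      # assumed window: keep polling for a few hours

while datetime.now() < end_time:
    # Outlook expects the date-time string without seconds in a Restrict filter
    restriction = "[ReceivedTime] >= '" + last_check.strftime('%m/%d/%Y %I:%M %p') + "'"
    new_messages = inbox.Items.Restrict(restriction)
    for message in new_messages:
        body_content = message.Body
        # ... run the regular expression extraction and Selenium form filling here ...
    last_check = datetime.now()
    time.sleep(60)                              # assumed poll interval: once a minute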
I'm using Instagram-API-python to create an application. I'm getting a JSON response with the value below.
'device_timestamp': 607873890651
I tried to convert this value to a readable date using Python.
import time
readable = time.ctime(607873890651)
print(readable)
It gives the following result, which does not seem correct.
Sun Oct 3 16:00:51 21232
I'm not very familiar with Instagram-API-python. Could someone please help me solve this problem?
The data is very likely to be incorrect.
A timestamp is a very standard way to store a date-time: it counts the seconds that have passed since January 1st, 1970, also known as the UNIX Epoch.
I looked for "Instagram 'device_timestamp'" on Google and all the user-provided values made sense, but yours doesn't.
This is probably an error from the database, it happens.
Use the mentioned ctime conversion, but take the 'taken_at' field if available.
Don't use device_timestamp; use the taken_at field instead. taken_at then needs to be multiplied by 1000, because Java's Date constructor expects milliseconds.
In Java it looks like this:
Date data = new Date(taken_at * 1000L);
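In Python, assuming taken_at is a Unix timestamp in seconds (which is exactly why the Java Date constructor above needs milliseconds), a minimal sketch would be:
from datetime import datetime

taken_at = 1545407444                          # hypothetical value from the API response
readable = datetime.fromtimestamp(taken_at)    # fromtimestamp expects seconds, so no * 1000 here
print(readable)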
My server's timezone and the data I fetch with the code below span two consecutive hours. Once the hour changes, the hour that my Python code requests is not yet available on the server providing the content, since my request jumps to the next hour while that hour's data has not been processed yet. In case you are wondering, the data in question is weather model data in .grib2 format.
I have the following code now:
#!/usr/bin/python
import time
# Save your URL to a variable:
url = time.strftime("http://nomads.ncep.noaa.gov/pub/data/nccf/nonoperational/com/hrrr/para/hrrr.%Y%m%d/hrrr.t%Hz.wrfnatf04.grib2")
# Save that string to a file:
with open('hrrr/hrrrf4.txt', 'a') as f:
    f.write(url + '\n')
Is there a way to 'lag' the %H variable in the above URL by one hour, or is there another method that will delay it to ensure smooth data processing for all desired hours?
Thank you for taking the time to answer my question.
The code below prints out the current datetime and then offsets it by subtracting one hour; you could also add an hour, or minutes, seconds, etc. I scrape lots of forums that are in different timezones than my scraping server, and that's how I adjust anyway. This also helps if the server's clock is off a little bit: you can adjust the time back or forward however much you need.
import datetime
timenow = datetime.datetime.now()
timeonehourago = timenow - datetime.timedelta(hours=1)
url = timenow.strftime("http://nomads.ncep.noaa.gov/pub/data/nccf/nonoperational/com/hrrr/para/hrrr.%Y%m%d/hrrr.t%Hz.wrfnatf04.grib2")
offseturl = timeonehourago.strftime("http://nomads.ncep.noaa.gov/pub/data/nccf/nonoperational/com/hrrr/para/hrrr.%Y%m%d/hrrr.t%Hz.wrfnatf04.grib2")
print(url)
print(offseturl)
I have this silly question. I analyze data packets with Scapy, and there is a field inside the packet called timestamp (TSFT), which is the time at which the packet was constructed. So I grab that value (packet[RadioTap].TSFT), but I do not know if the value is in nanoseconds or in microseconds.
Could anyone inform me? I haven't seen it documented anywhere.
Thanks in advance.
It's in microseconds. From http://www.radiotap.org/defined-fields/TSFT:
Value in microseconds of the MAC's 64-bit 802.11 Time Synchronization Function timer when the first bit of the MPDU arrived at the MAC. For received frames only.
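Assuming your capture really exposes that field the way your snippet suggests, a minimal conversion sketch (the capture file name is a placeholder; the field access copies the question's own packet[RadioTap].TSFT usage):
from scapy.all import rdpcap
from scapy.layers.dot11 import RadioTap

packets = rdpcap("capture.pcap")        # placeholder capture file with radiotap headers
tsft_us = packets[0][RadioTap].TSFT     # TSF timer value, in microseconds per the spec above
print(tsft_us, "us =", tsft_us / 1e6, "s")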
The timestamp is in seconds.
You can import datetime in Python and use its fromtimestamp method to get it in an easier-to-read format, like so:
import datetime
ts = datetime.datetime.fromtimestamp(1305354670.602149)
print(ts)
2011-05-14 02:31:10.602149
Hope this helped.