Quandl issue on change and data for latest day - python

I've been working with the Quandl API recently and I've been stuck on an issue for a while.
My question is how to write a method that returns the difference between one
date and the date before it for a stock index. The data comes back as
an array, for example: [[u'2015-04-30', 17840.52]] for the Dow Jones
Industrial Average. I'd also like a way to get the change
from the day before the latest one, say getting Friday's value and
the change between that and Thursday's.
My code:
import json
import requests

def fetchData(apikey, url):
    '''Returns JSON data of the Dow Jones Average.'''
    parameters = {'rows': 1, 'auth_token': apikey}
    req = requests.get(url, params=parameters)
    data = json.loads(req.content)
    parsedData = []
    stockData = {}
    for datum in data:
        if data['code'] == 'COMP':
            stockData['name'] = data['name']
            stockData['description'] = ('The NASDAQ Composite Index measures all '
                                        'NASDAQ domestic and international based common type stocks '
                                        'listed on The NASDAQ Stock Market.')
            stockData['data'] = data['data']
            stockData['code'] = data['code']
        else:
            stockData['name'] = data['name']
            stockData['description'] = data['description']
            stockData['data'] = data['data']
            stockData['code'] = data['code']
    parsedData.append(stockData)
    return parsedData
I've attempted to just tack [1] onto data to get just the current day, but the issue of getting the day before has me stumped.
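One way to attack both parts, sketched under the assumption that the 'data' field of the response is a list of [date, value] rows ordered newest first (as the sample above suggests): request two rows instead of one and subtract.

import requests

def fetch_latest_change(apikey, url):
    '''Return (latest_date, latest_value, change vs. the previous day).

    Assumes the Quandl v1 JSON layout where 'data' is a list of
    [date, value] rows ordered newest first.
    '''
    params = {'rows': 2, 'auth_token': apikey}  # ask for the two most recent rows
    payload = requests.get(url, params=params).json()
    (latest_date, latest_value), (_, prev_value) = payload['data'][:2]
    return latest_date, latest_value, latest_value - prev_value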


def Function Is Not Displaying Expected Result

I'm trying to get stock prices to buy or sell on a specific date. When the buy price is given, the sell price should be NaN. Likewise, if the sell price is given, the buy price has to be NaN. This function was originally proposed by Joseph Hart (https://medium.com/analytics-vidhya/sma-short-moving-average-in-python-c656956a08f8).
The return values of the function are (sig_buy_price, sig_sell_price). My data source is a pandas DataFrame, namely qqq_df; SMA_30 and SMA_100 are columns derived from qqq_df.
The output is not giving me the expected result stated above. Please find the code below. I need specific steps and code to resolve the issue. I look forward to hearing from forum members. Thanks.
def buy_sell(qqq_df):
    sig_price_buy = []
    sig_price_sell = []
    flag = -1
    for i in range(len(qqq_df)):
        if qqq_df['sma_30'][i] > qqq_df['sma_100'][i]:
            if flag != 1:
                sig_price_buy.append(qqq_df['close'][i])
                sig_price_sell.append(np.nan)
                print(qqq_df['date'][i])
            else:
                sig_price_buy.append(np.nan)
                sig_price_buy.append(np.nan)
        elif qqq_df['sma_30'][i] < qqq_df['sma_100'][i]:
            if flag != 0:
                sig_price_buy.append(np.nan)
                sig_price_sell.append(qqq_df['close'][i])
                print(qqq_df['date'][i])
                flag = 0
            else:
                sig_price_buy.append(np.nan)
                sig_price_sell.append(np.nan)
        else:
            sig_price_buy.append(np.nan)
            sig_price_sell.append(np.nan)
    return (sig_price_buy, sig_price_sell)

b, s = buy_sell(qqq_df=qqq_df)
print(b, s)
The following code worked for me:
def buy_sell(data):
    sig_price_buy = []
    sig_price_sell = []
    flag = -1
    for i in range(len(data)):
        if data['SMA30'][i] >= data['SMA100'][i] and flag != 1:
            sig_price_buy.append(data['TSLA'][i])
            sig_price_sell.append(np.nan)
            flag = 1
        elif data['SMA30'][i] <= data['SMA100'][i] and flag != 0:
            sig_price_buy.append(np.nan)
            sig_price_sell.append(data['TSLA'][i])
            flag = 0
        else:
            sig_price_buy.append(np.nan)
            sig_price_sell.append(np.nan)
    return (sig_price_buy, sig_price_sell)
The format of the input data must be:
TSLA = pd.read_csv("TSLA.csv")
close = TSLA.Close
data = pd.DataFrame()
data['SMA30'] = close.rolling(window = 30).mean()
data['SMA100'] = close.rolling(window = 100).mean()
data['TSLA'] = close
I tested it on TSLA stock, so never mind the different naming.
If you want to print the result in a certain format, which I read in one of your comments, I recommend first running the function and then, as a second step, printing the resulting arrays formatted the way you like.
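For example, a minimal sketch of that second step, assuming the data frame built above and the buy_sell function from this answer:

import numpy as np

# Hypothetical second step: attach the signal arrays to the frame and
# print only the rows where a buy or sell was actually triggered.
buy, sell = buy_sell(data)
data['buy'] = buy
data['sell'] = sell
signals = data[data['buy'].notna() | data['sell'].notna()]
print(signals[['TSLA', 'buy', 'sell']].to_string())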
EDIT:
If you wish to access a yahoo finance stock directly from python, you can use the following framework:
import datetime
from os import mkdir, path
from urllib import request

def convertDateToYahooTime(date: datetime.datetime) -> int:
    # converts a date to a POSIX timestamp for the yahoo finance website
    return int(date.replace(hour=2, minute=0, second=0, microsecond=0).timestamp())

def downloadStock(stock: str, olderDate: datetime.datetime, recentDate: datetime.datetime) -> str:
    # downloads a stock from yahoo finance and returns the filename
    p1: int = convertDateToYahooTime(olderDate)
    p2: int = convertDateToYahooTime(recentDate)
    link = f"https://query1.finance.yahoo.com/v7/finance/download/{stock}?period1={p1}&period2={p2}&interval=1d&events=history&includeAdjustedClose=true"
    file = f"{stock}_{olderDate.date().strftime('%Y-%m-%d')}_{recentDate.date().strftime('%Y-%m-%d')}"
    if not path.exists('stocks'):
        mkdir('stocks')
    response = request.urlretrieve(link, f'stocks//{file}')  # save the stock to a folder named 'stocks'
    return file

def fetchStock(stock: str, period: int) -> str:
    # fetches a stock and returns the filename
    today = datetime.datetime.today().replace(hour=0, minute=0, second=0, microsecond=0)
    past = datetime.datetime.fromtimestamp(today.timestamp() - period*86400)
    return downloadStock(stock, past, today)
downloadStock accesses the download link for any yahoo finance stock you want. fetchStock just makes your life easier: you enter the name of your stock and the period, i.e. how many days of the stock you wish to access, counting backward from the present day. So if you do this:
STOCK = 'GOOG'
PERIOD = 1000
file = fetchStock(STOCK, PERIOD)
df = pd.read_csv('stocks//' + file)
df will be the Google stock data of the past 1000 days. You can do this to load any stock you want. After this, you can repeat the steps I mentioned above to format the stock and run the SMA30/100 algorithm on it.
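Putting the pieces together, a sketch of the full pipeline (it assumes the functions above and the buy_sell from earlier are in scope, and that Yahoo's CSV names its closing-price column 'Close'):

import pandas as pd

# Hypothetical end-to-end run: download ~1000 days of GOOG, build the
# SMAs, and generate signals with the buy_sell function from above.
file = fetchStock('GOOG', 1000)
raw = pd.read_csv('stocks//' + file)

data = pd.DataFrame()
data['SMA30'] = raw['Close'].rolling(window=30).mean()
data['SMA100'] = raw['Close'].rolling(window=100).mean()
data['TSLA'] = raw['Close']  # buy_sell expects this column name

buy, sell = buy_sell(data)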
I'm not sure whether this is what you wanted, but I hope it helps and you have fun with it.

Why do Twint's "since" & "until" not work?

I'm trying to get all tweets from 2018-01-01 until now for various firms.
My code runs, but I do not get the tweets from that time range. Sometimes I only get the tweets from today and yesterday, or from mid-April up to now, but not since the beginning of 2018. I then get the message: [!] No more data! Scraping will stop now.
import csv
import pandas as pd
import twint

ticker = []

# read in csv file with company tickers into a list
with open('C:\\Users\\veron\\Desktop\\Test.csv', newline='') as inputfile:
    for row in csv.reader(inputfile):
        ticker.append(row[0])

# Getting tweets for every ticker in the list
for i in ticker:
    searchstring = f"{i} since:2018-01-01"
    c = twint.Config()
    c.Search = searchstring
    c.Lang = "en"
    c.Panda = True
    c.Custom["tweet"] = ["date", "username", "tweet"]
    c.Store_csv = True
    c.Output = f"{i}.csv"
    twint.run.Search(c)
    df = pd.read_csv(f"{i}.csv")
    df['company'] = i
    df.to_csv(f"{i}.csv", index=False)
Has anyone had the same issue and have any tips?
You need to add the configuration parameter Since separately. For example:
c.Since = "2018-01-01"
Similarly for Until:
c.Until = "2017-12-27"
The official documentation might be helpful.
Since (string) - Filter Tweets sent since date, works only with twint.run.Search (Example: 2017-12-27).
Until (string) - Filter Tweets sent until date, works only with twint.run.Search (Example: 2017-12-27).
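Applied to the loop above, that would look something like this sketch, with the date range moved out of the search string and into the config:

for i in ticker:
    c = twint.Config()
    c.Search = i                  # search for the ticker itself
    c.Since = "2018-01-01"        # filter tweets sent since this date
    c.Lang = "en"
    c.Custom["tweet"] = ["date", "username", "tweet"]
    c.Store_csv = True
    c.Output = f"{i}.csv"
    twint.run.Search(c)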

Td Ameritrade download historical data with endDate startDate

I can't figure out how to get data for a given day. Using the annual line in my code (the commented-out period-based call below), I know the millisecond values of the dates I want:
1612159200000.00 AAPL 2/1/2021 6:00
1612418400000.00 AAPL 2/4/2021 6:00
But putting these values in the code doesn't work:
data = get_price_history(symbol=i, endDate=1612418400000, startDate=1612159200000, frequency=1, frequencyType='daily')
import requests
import pandas as pd
import time
import datetime

# tickers_list = ['AAPL', 'AMGN', 'AXP']
# print(len(tickers_list))
key = '****'

def get_price_history(**kwargs):
    url = 'https://api.tdameritrade.com/v1/marketdata/{}/pricehistory'.format(kwargs.get('symbol'))
    params = {}
    params.update({'apikey': key})
    for arg in kwargs:
        parameter = {arg: kwargs.get(arg)}
        params.update(parameter)
    return requests.get(url, params=params).json()

tickers_list = ['AAPL', 'AMGN', 'WMT']
historical = pd.DataFrame()
for i in tickers_list:
    # get data 1 year 1 day frequency -- good
    # data = get_price_history(symbol=i, period=1, periodType='year', frequency=1, frequencyType='daily')
    data = get_price_history(symbol=i, endDate=1612418400000, startDate=1612159200000, frequency=1, frequencyType='daily')
    info = pd.DataFrame(data['candles'])
    historical = pd.concat([historical, info])
historical['date'] = pd.to_datetime(historical['datetime'], unit='ms')
historical
From the Ameritrade Price History API documentation:
6 Months / 1 Day, including today's data:
https://api.tdameritrade.com/v1/marketdata/XYZ/pricehistory?periodType=month&frequencyType=daily&endDate=1464825600000
Note that periodType=month is specified because the default periodType is day, which is not compatible with the frequencyType daily.
So it seems that this line in your code:
data = get_price_history(symbol=i, endDate=1612418400000, startDate=1612159200000, frequency=1, frequencyType='daily')
is missing a valid periodType parameter. Try:
data = get_price_history(symbol=i, endDate=1612418400000, startDate=1612159200000, frequency=1, periodType='month', frequencyType='daily')
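As an aside, those startDate/endDate values are millisecond epoch timestamps; a quick sketch (the helper name is hypothetical) of deriving them in Python:

import datetime

def to_ms_epoch(dt):
    # TD Ameritrade's startDate/endDate are milliseconds since the epoch;
    # a naive datetime is interpreted in the machine's local timezone here.
    return int(dt.timestamp() * 1000)

start = to_ms_epoch(datetime.datetime(2021, 2, 1))
end = to_ms_epoch(datetime.datetime(2021, 2, 4))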
Step 1: you need a valid session.
Step 2: you can use the tda-api function get_price_history().
See this example, which I successfully used to get daily data given a start and end date:
from datetime import datetime

import httpx
import pandas as pd
from tda.auth import easy_client

# need a valid refresh token to use easy_client
Client = easy_client(
    api_key='APIKEY',
    redirect_uri='https://localhost',
    token_path='/tmp/token.json')

# get daily data given a start and end date
resp = Client.get_price_history('AAPL',
                                period_type=Client.PriceHistory.PeriodType.YEAR,
                                start_datetime=datetime(2019, 9, 30),
                                end_datetime=datetime(2019, 10, 30),
                                frequency_type=Client.PriceHistory.FrequencyType.DAILY,
                                frequency=Client.PriceHistory.Frequency.DAILY)
assert resp.status_code == httpx.codes.OK
history = resp.json()
aapl = pd.DataFrame(history)
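Note that pd.DataFrame(history) keeps the nested structure; a sketch of flattening it, assuming the same 'candles'/'datetime' layout as the raw REST response used in the first answer:

# Hypothetical post-processing: the price history JSON nests its rows
# under a 'candles' key, so flatten that and convert the ms timestamps.
aapl = pd.DataFrame(history['candles'])
aapl['date'] = pd.to_datetime(aapl['datetime'], unit='ms')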

ValueError: timestamp out of range for platform localtime()/gmtime() function

I have a class assignment to write a python program to download end-of-day data for the last 25 years for the major global stock market indices from Yahoo Finance:
Dow Jones Index (USA)
S&P 500 (USA)
NASDAQ (USA)
DAX (Germany)
FTSE (UK)
HANGSENG (Hong Kong)
KOSPI (Korea)
CNX NIFTY (India)
Unfortunately, when I run the program an error occurs:
File "C:\ProgramData\Anaconda2\lib\site-packages\yahoofinancials\__init__.py", line 91, in format_date
    form_date = datetime.datetime.fromtimestamp(int(in_date)).strftime('%Y-%m-%d')
ValueError: timestamp out of range for platform localtime()/gmtime() function
Below you can see the code that I have written. I'm trying to debug my mistakes. Can you help me out please? Thanks.
from yahoofinancials import YahooFinancials
import pandas as pd
# Select Tickers and stock history dates
index1 = '^DJI'
index2 = '^GSPC'
index3 = '^IXIC'
index4 = '^GDAXI'
index5 = '^FTSE'
index6 = '^HSI'
index7 = '^KS11'
index8 = '^NSEI'
freq = 'daily'
start_date = '1993-06-30'
end_date = '2018-06-30'
# Function to clean data extracts
def clean_stock_data(stock_data_list):
    new_list = []
    for rec in stock_data_list:
        if 'type' not in rec.keys():
            new_list.append(rec)
    return new_list
# Construct yahoo financials objects for data extraction
dji_financials = YahooFinancials(index1)
gspc_financials = YahooFinancials(index2)
ixic_financials = YahooFinancials(index3)
gdaxi_financials = YahooFinancials(index4)
ftse_financials = YahooFinancials(index5)
hsi_financials = YahooFinancials(index6)
ks11_financials = YahooFinancials(index7)
nsei_financials = YahooFinancials(index8)
# Clean returned stock history data and remove dividend events from price history
daily_dji_data = clean_stock_data(dji_financials.get_historical_stock_data(start_date, end_date, freq)[index1]['prices'])
daily_gspc_data = clean_stock_data(gspc_financials.get_historical_stock_data(start_date, end_date, freq)[index2]['prices'])
daily_ixic_data = clean_stock_data(ixic_financials.get_historical_stock_data(start_date, end_date, freq)[index3]['prices'])
daily_gdaxi_data = clean_stock_data(gdaxi_financials.get_historical_stock_data(start_date, end_date, freq)[index4]['prices'])
daily_ftse_data = clean_stock_data(ftse_financials.get_historical_stock_data(start_date, end_date, freq)[index5]['prices'])
daily_hsi_data = clean_stock_data(hsi_financials.get_historical_stock_data(start_date, end_date, freq)[index6]['prices'])
daily_ks11_data = clean_stock_data(ks11_financials.get_historical_stock_data(start_date, end_date, freq)[index7]['prices'])
daily_nsei_data = clean_stock_data(nsei_financials.get_historical_stock_data(start_date, end_date, freq)[index8]['prices'])

stock_hist_data_list = [{'^DJI': daily_dji_data}, {'^GSPC': daily_gspc_data}, {'^IXIC': daily_ixic_data},
                        {'^GDAXI': daily_gdaxi_data}, {'^FTSE': daily_ftse_data}, {'^HSI': daily_hsi_data},
                        {'^KS11': daily_ks11_data}, {'^NSEI': daily_nsei_data}]
# Function to construct data frame based on a stock and its market index
def build_data_frame(data_list1, data_list2, data_list3, data_list4, data_list5, data_list6, data_list7, data_list8):
    data_dict = {}
    i = 0
    for list_item in data_list2:
        if 'type' not in list_item.keys():
            data_dict.update({list_item['formatted_date']: {'^DJI': data_list1[i]['close'], '^GSPC': list_item['close'],
                                                            '^IXIC': data_list3[i]['close'], '^GDAXI': data_list4[i]['close'],
                                                            '^FTSE': data_list5[i]['close'], '^HSI': data_list6[i]['close'],
                                                            '^KS11': data_list7[i]['close'], '^NSEI': data_list8[i]['close']}})
            i += 1
    tseries = pd.to_datetime(list(data_dict.keys()))
    df = pd.DataFrame(data=list(data_dict.values()), index=tseries,
                      columns=['^DJI', '^GSPC', '^IXIC', '^GDAXI', '^FTSE', '^HSI', '^KS11', '^NSEI']).sort_index()
    return df
Your problem is that your datetime stamps are in the wrong format. If you look at the error, it tells you exactly where:
datetime.datetime.fromtimestamp(int(in_date)).strftime('%Y-%m-%d')
Notice the int(in_date) part?
It wants a unix timestamp. There are several ways to get this, from the time module or the calendar module, or using Arrow.
import datetime
import calendar
date = datetime.datetime.strptime("1993-06-30", "%Y-%m-%d")
start_date = calendar.timegm(date.utctimetuple())
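Wrapped as a small helper (a sketch; the function name is hypothetical), the same conversion covers both ends of the range:

import calendar
import datetime

def to_unix(date_string):
    # parse 'YYYY-MM-DD' and return seconds since the epoch (UTC)
    d = datetime.datetime.strptime(date_string, "%Y-%m-%d")
    return calendar.timegm(d.utctimetuple())

start_ts = to_unix("1993-06-30")
end_ts = to_unix("2018-06-30")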
* UPDATED *
OK, so I fixed everything up to the dataframes portion. Here is my current code:
# Select Tickers and stock history dates
index = {'DJI': YahooFinancials('^DJI'),
         'GSPC': YahooFinancials('^GSPC'),
         'IXIC': YahooFinancials('^IXIC'),
         'GDAXI': YahooFinancials('^GDAXI'),
         'FTSE': YahooFinancials('^FTSE'),
         'HSI': YahooFinancials('^HSI'),
         'KS11': YahooFinancials('^KS11'),
         'NSEI': YahooFinancials('^NSEI')}
freq = 'daily'
start_date = '1993-06-30'
end_date = '2018-06-30'

# Clean returned stock history data and remove dividend events from price history
daily = {}
for k in index:
    tmp = index[k].get_historical_stock_data(start_date, end_date, freq)
    if tmp:
        daily[k] = tmp['^{}'.format(k)]['prices'] if 'prices' in tmp['^{}'.format(k)] else []
Unfortunately I had to fix a couple of things in the yahoo module. For the class YahooFinanceETL:
@staticmethod
def format_date(in_date, convert_type):
    try:
        x = int(in_date)
        convert_type = 'standard'
    except:
        convert_type = 'unixstamp'
    if convert_type == 'standard':
        if in_date < 0:
            form_date = datetime.datetime(1970, 1, 1) + datetime.timedelta(seconds=in_date)
        else:
            form_date = datetime.datetime.fromtimestamp(int(in_date)).strftime('%Y-%m-%d')
    else:
        split_date = in_date.split('-')
        d = date(int(split_date[0]), int(split_date[1]), int(split_date[2]))
        form_date = int(time.mktime(d.timetuple()))
    return form_date
AND:
# private static method to scrape data from yahoo finance
@staticmethod
def _scrape_data(url, tech_type, statement_type):
    response = requests.get(url)
    soup = BeautifulSoup(response.content, "html.parser")
    script = soup.find("script", text=re.compile("root.App.main")).text
    data = loads(re.search("root.App.main\s+=\s+(\{.*\})", script).group(1))
    if tech_type == '' and statement_type != 'history':
        stores = data["context"]["dispatcher"]["stores"]["QuoteSummaryStore"]
    elif tech_type != '' and statement_type != 'history':
        stores = data["context"]["dispatcher"]["stores"]["QuoteSummaryStore"][tech_type]
    else:
        if "HistoricalPriceStore" in data["context"]["dispatcher"]["stores"]:
            stores = data["context"]["dispatcher"]["stores"]["HistoricalPriceStore"]
        else:
            stores = data["context"]["dispatcher"]["stores"]["QuoteSummaryStore"]
    return stores
You will want to look at the daily dict and rewrite your build_data_frame function, which should be a lot simpler now since you are working with a dictionary already.
I am actually the maintainer and author of YahooFinancials. I just saw this post and wanted to personally apologize for the inconvenience and let you all know I will be working on fixing the module this evening.
Could you please open an issue on the module's Github page detailing this?
It would also be very helpful to know which version of python you were running when you encountered these issues.
https://github.com/JECSand/yahoofinancials/issues
I am at work right now, however as soon as I get home in ~7 hours or so I will attempt to code a fix and release it. I'll also work on the exception handling. I try my best to maintain this module, but my day (and often night time) job is rather demanding. I will report back with the final results of these fixes and publish to pypi when it is done and stable.
Also, if anyone else has any feedback or personal fixes you can offer, it would be a huge help in fixing this. Proper credit will be given, of course. I am also in desperate need of contributors, so if anyone is interested in that as well, let me know. I really want to take YahooFinancials to the next level and make this project a stable and reliable source of free financial data for python projects.
Thank you for your patience and for using YahooFinancials.

Using pandas to scrape weather data from wunderground

I came across a very useful set of scripts on Shane Lynn's blog for the
analysis of weather data. The first script, used to scrape data from Weather Underground, is as follows:
import requests
import pandas as pd
from dateutil import parser, rrule
from datetime import datetime, time, date
import time
def getRainfallData(station, day, month, year):
    """
    Function to return a data frame of minute-level weather data for a single Wunderground PWS station.

    Args:
        station (string): Station code from the Wunderground website
        day (int): Day of month for which data is requested
        month (int): Month for which data is requested
        year (int): Year for which data is requested

    Returns:
        Pandas Dataframe with weather data for specified station and date.
    """
    url = "http://www.wunderground.com/weatherstation/WXDailyHistory.asp?ID={station}&day={day}&month={month}&year={year}&graphspan=day&format=1"
    full_url = url.format(station=station, day=day, month=month, year=year)
    # Request data from wunderground data
    response = requests.get(full_url, headers={'User-agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36'})
    data = response.text
    # remove the excess <br> from the text data
    data = data.replace('<br>', '')
    # Convert to pandas dataframe (fails if issues with weather station)
    try:
        dataframe = pd.read_csv(io.StringIO(data), index_col=False)
        dataframe['station'] = station
    except Exception as e:
        print("Issue with date: {}-{}-{} for station {}".format(day, month, year, station))
        return None
    return dataframe
# Generate a list of all of the dates we want data for
start_date = "2016-08-01"
end_date = "2016-08-31"
start = parser.parse(start_date)
end = parser.parse(end_date)
dates = list(rrule.rrule(rrule.DAILY, dtstart=start, until=end))

# Create a list of stations here to download data for
stations = ["ILONDON28"]

# Set a backoff time in seconds if a request fails
backoff_time = 10
data = {}

# Gather data for each station in turn and save to CSV.
for station in stations:
    print("Working on {}".format(station))
    data[station] = []
    for date in dates:
        # Print period status update messages
        if date.day % 10 == 0:
            print("Working on date: {} for station {}".format(date, station))
        done = False
        while done == False:
            try:
                weather_data = getRainfallData(station, date.day, date.month, date.year)
                done = True
            except ConnectionError as e:
                # May get rate limited by Wunderground.com, backoff if so.
                print("Got connection error on {}".format(date))
                print("Will retry in {} seconds".format(backoff_time))
                time.sleep(10)
        # Add each processed date to the overall data
        data[station].append(weather_data)
    # Finally combine all of the individual days and output to CSV for analysis.
    pd.concat(data[station]).to_csv("data/{}_weather.csv".format(station))
However, I get the error:
Working on ILONDONL28
Issue with date: 1-8-2016 for station ILONDONL28
Issue with date: 2-8-2016 for station ILONDONL28
Issue with date: 3-8-2016 for station ILONDONL28
Issue with date: 4-8-2016 for station ILONDONL28
Issue with date: 5-8-2016 for station ILONDONL28
Issue with date: 6-8-2016 for station ILONDONL28
Can anyone help me with this error?
The data for the chosen station and the time period is available, as shown at this link.
The output you are getting is because an exception is being raised. If you added a print(e) you would see that this is because import io was missing from the top of the script. Secondly, the station name you gave was off by one character. Try the following:
import io
import requests
import pandas as pd
from dateutil import parser, rrule
from datetime import datetime, time, date
import time
def getRainfallData(station, day, month, year):
    """
    Function to return a data frame of minute-level weather data for a single Wunderground PWS station.

    Args:
        station (string): Station code from the Wunderground website
        day (int): Day of month for which data is requested
        month (int): Month for which data is requested
        year (int): Year for which data is requested

    Returns:
        Pandas Dataframe with weather data for specified station and date.
    """
    url = "http://www.wunderground.com/weatherstation/WXDailyHistory.asp?ID={station}&day={day}&month={month}&year={year}&graphspan=day&format=1"
    full_url = url.format(station=station, day=day, month=month, year=year)
    # Request data from wunderground data
    response = requests.get(full_url)
    data = response.text
    # remove the excess <br> from the text data
    data = data.replace('<br>', '')
    # Convert to pandas dataframe (fails if issues with weather station)
    try:
        dataframe = pd.read_csv(io.StringIO(data), index_col=False)
        dataframe['station'] = station
    except Exception as e:
        print("Issue with date: {}-{}-{} for station {}".format(day, month, year, station))
        return None
    return dataframe
# Generate a list of all of the dates we want data for
start_date = "2016-08-01"
end_date = "2016-08-31"
start = parser.parse(start_date)
end = parser.parse(end_date)
dates = list(rrule.rrule(rrule.DAILY, dtstart=start, until=end))

# Create a list of stations here to download data for
stations = ["ILONDONL28"]

# Set a backoff time in seconds if a request fails
backoff_time = 10
data = {}

# Gather data for each station in turn and save to CSV.
for station in stations:
    print("Working on {}".format(station))
    data[station] = []
    for date in dates:
        # Print period status update messages
        if date.day % 10 == 0:
            print("Working on date: {} for station {}".format(date, station))
        done = False
        while done == False:
            try:
                weather_data = getRainfallData(station, date.day, date.month, date.year)
                done = True
            except ConnectionError as e:
                # May get rate limited by Wunderground.com, backoff if so.
                print("Got connection error on {}".format(date))
                print("Will retry in {} seconds".format(backoff_time))
                time.sleep(10)
        # Add each processed date to the overall data
        data[station].append(weather_data)
    # Finally combine all of the individual days and output to CSV for analysis.
    pd.concat(data[station]).to_csv(r"data/{}_weather.csv".format(station))
Giving you an output CSV file starting as follows:
,Time,TemperatureC,DewpointC,PressurehPa,WindDirection,WindDirectionDegrees,WindSpeedKMH,WindSpeedGustKMH,Humidity,HourlyPrecipMM,Conditions,Clouds,dailyrainMM,SoftwareType,DateUTC,station
0,2016-08-01 00:05:00,17.8,11.6,1017.5,ESE,120,0.0,0.0,67,0.0,,,0.0,WeatherCatV2.31B93,2016-07-31 23:05:00,ILONDONL28
1,2016-08-01 00:20:00,17.7,11.0,1017.5,SE,141,0.0,0.0,65,0.0,,,0.0,WeatherCatV2.31B93,2016-07-31 23:20:00,ILONDONL28
2,2016-08-01 00:35:00,17.5,10.8,1017.5,South,174,0.0,0.0,65,0.0,,,0.0,WeatherCatV2.31B93,2016-07-31 23:35:00,ILONDONL28
If you are not getting a CSV file, I suggest you add a full path to the output filename.
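For instance, a sketch of the final write with the output directory created up front and an absolute path (os.makedirs is standard library; the rest assumes the variables from the loop above):

import os

# Create the output directory if needed and resolve an absolute path,
# so it is obvious where the CSV ends up.
out_dir = os.path.abspath("data")
os.makedirs(out_dir, exist_ok=True)
out_file = os.path.join(out_dir, "{}_weather.csv".format(station))
pd.concat(data[station]).to_csv(out_file)
print("Wrote", out_file)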
