Even when I pass a "since" value to ccxt's bitget, I always get only the latest data. The same code worked on FTX; what's the problem?
bitget = ccxt.bitget({'apiKey': self.KEY,
                      'secret': self.SECRET_KEY,
                      'enableRateLimit': True,
                      'options': {'defaultType': 'swap'}
                      })
yyyymmdd = '20220301'
since = int(datetime(int(yyyymmdd[:4]),int(yyyymmdd[4:6]),int(yyyymmdd[6:])).timestamp()*1000)
ohlcv = bitget.fetch_ohlcv('BTC/USDT', '1m', since, limit = 1000)
ohlcv = pd.DataFrame(ohlcv)
ohlcv.columns = ['time','open','high','low','close','volume']
ohlcv['time'] = ohlcv['time'].apply(lambda x : datetime.fromtimestamp(x/1000).strftime('%Y%m%d %H:%M'))
time open high low close volume
0 20220322 14:36 42957.24 42959.97 42927.88 42927.88 1.8439
1 20220322 14:37 42927.88 42957.04 42927.88 42951.36 1.2933
2 20220322 14:38 42951.36 42951.36 42928.46 42932.59 0.6664
3 20220322 14:39 42932.59 42938.0 42916.22 42916.22 2.0336
4 20220322 14:40 42916.22 42916.22 42891.29 42897.49 2.0132
5 20220322 14:41 42897.49 42900.14 42880.96 42884.51 1.6279
6 20220322 14:42 42884.51 42893.26 42870.46 42870.46 2.3478
...
How can I get data for a specific time period?
Maybe pass it explicitly as a keyword argument, since=since.
Also make sure it is a millisecond Unix timestamp:
since = int(datetime.datetime.strptime("2021-05-18 11:20:00+00:00", "%Y-%m-%d %H:%M:%S%z").timestamp() * 1000)
EDIT: looks like your Unix timestamp was right, my bad.
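For completeness, a minimal sketch of pulling a whole window by paging forward from since. This assumes Bitget honours since on fetch_ohlcv and accepts limit=1000 per request; the exchange may cap the batch size lower.

import ccxt
from datetime import datetime, timezone

bitget = ccxt.bitget({'options': {'defaultType': 'swap'}})

since = int(datetime(2022, 3, 1, tzinfo=timezone.utc).timestamp() * 1000)  # window start
until = int(datetime(2022, 3, 2, tzinfo=timezone.utc).timestamp() * 1000)  # window end

candles = []
while since < until:
    batch = bitget.fetch_ohlcv('BTC/USDT', '1m', since=since, limit=1000)
    if not batch:
        break
    candles += batch
    since = batch[-1][0] + 60 * 1000  # resume from the minute after the last candle returned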
I want to know whether an entered date of birth makes the person over 18 or under.
from datetime import date

def is_under_18(birth):
    now = date.today()
    return (
        now.year - birth.year < 18
        or now.year - birth.year == 18 and (
            now.month < birth.month
            or now.month == birth.month and now.day <= birth.day
        )
    )
And then:
year = int(input("Year born: "))
month = int(input("Month born: "))
day = int(input("Day born: "))
birth = date(year, month, day)
if is_under_18(birth):
    print('Under 18')
else:
    print('Adult')
However, say I add a user whose birthday is the 25th of November 2004. The program lets me add them because it does not take the month into account. If I add a user born on the 1st of January 2005, it does not allow me, because 2022 - 2005 = 17.
Your original code doesn't seem to have a problem with the dates you mention, but it does have a bug: Nov 22, 2004 is reported as "Under 18" when today's date is Nov 22, 2022 (the 18th birthday). Use now.day < birth.day instead.
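In other words, only the final comparison changes from <= to <, so someone whose 18th birthday is today counts as an adult:

def is_under_18(birth):
    now = date.today()
    return (
        now.year - birth.year < 18
        or now.year - birth.year == 18 and (
            now.month < birth.month
            or now.month == birth.month and now.day < birth.day  # was <=
        )
    )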
But if you compute the latest birth date that still makes someone 18, by replacing today's year with one 18 less, and then compare the dates directly, you don't need a complicated comparison:
from datetime import date

def is_under_18(birth):
    # today = date.today()
    today = date(2022, 11, 22)  # for repeatability of results
    born_on_or_before = today.replace(year=today.year - 18)
    return birth > born_on_or_before

print(f'Today is {date.today()}')
for year, month, day in [(2004, 11, 21), (2004, 11, 22), (2004, 11, 23), (2004, 11, 25), (2005, 1, 1)]:
    birth = date(year, month, day)
    if is_under_18(birth):
        print(f'{birth} Under 18')
    else:
        print(f'{birth} Adult')
Output:
Today is 2022-11-22
2004-11-21 Adult
2004-11-22 Adult
2004-11-23 Under 18
2004-11-25 Under 18
2005-01-01 Under 18
I'm trying to convert my UNIX timestamps to the US/Central timezone, but I keep getting UTC output. I don't know what I'm doing wrong in the code.
import ccxt
import pandas as pd
from dateutil import tz
binance = ccxt.binance({
    'enableRateLimit': True,
    'apiKey': 'xxxxxxxxxxxxxxxxxxx',
    'secret': 'xxxxxxxxxxxxx'
})
symbol = 'ETHUSDT'
timeframe = '15m'
limit = 500
bars = binance.fetch_ohlcv (symbol, timeframe = timeframe, limit = limit)
df = pd.DataFrame(bars, columns = ['timestamp','open','high','low', 'close', 'volume'])
df['timestamp'] = pd.to_datetime(df['timestamp'], unit = 'ms').dt.tz_localize(tz='US/Central')
df['timestamp'] = pd.to_datetime(df['timestamp'], unit = 'ms').dt.tz_convert(tz='US/Central')
print(df)
timestamp open high low close volume
0 2022-11-21 12:15:00-06:00 1120.63 1122.74 1118.26 1119.31 3278.5060
1 2022-11-21 12:30:00-06:00 1119.30 1127.48 1115.10 1125.31 11065.4442
2 2022-11-21 12:45:00-06:00 1125.32 1128.36 1123.92 1127.30 5447.6054
3 2022-11-21 13:00:00-06:00 1127.30 1136.75 1125.67 1133.81 15977.1500
4 2022-11-21 13:15:00-06:00 1133.82 1146.99 1132.77 1139.39 21009.7356
.. ... ... ... ... ... ...
495 2022-11-26 16:00:00-06:00 1210.90 1212.87 1208.77 1212.54 3822.1327
496 2022-11-26 16:15:00-06:00 1212.55 1213.92 1212.09 1213.63 2414.2695
497 2022-11-26 16:30:00-06:00 1213.62 1213.63 1211.05 1212.89 2461.4644
498 2022-11-26 16:45:00-06:00 1212.89 1212.94 1209.00 1209.76 2544.8965
499 2022-11-26 17:00:00-06:00 1209.75 1210.00 1207.74 1209.77 1638.1446
The timestamps come back from the exchange in UTC milliseconds, so you need to localize them as UTC first and then convert. I think you want:
df["timestamp"] = (
    pd.to_datetime(df["timestamp"], unit="ms")  # naive datetimes, in UTC
    .dt.tz_localize("UTC")                      # mark them as UTC
    .dt.tz_convert("US/Central")                # shift to Central time
    .dt.tz_localize(None)                       # drop the tz info, keep the local wall time
)
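A quick self-contained check of that chain on one sample millisecond timestamp (the value is just illustrative):

import pandas as pd

df = pd.DataFrame({"timestamp": [1669054500000]})  # 2022-11-21 18:15:00 UTC
df["timestamp"] = (
    pd.to_datetime(df["timestamp"], unit="ms")
    .dt.tz_localize("UTC")
    .dt.tz_convert("US/Central")
    .dt.tz_localize(None)
)
print(df)  # 2022-11-21 12:15:00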
I have two APIs.
Australia API - this API works only for the years 1985 to 2024.
USA API - I want this API to be used only for years before 1985.
I take four things from the user:
-Start Year
-End Year
-latitude
-longitude
sample command: python test.py -latitude '' -longitude '' -startYear '' -endYear ''
The user can enter input in three ways:
Case 1. Start year before 1985, end year after 1985 ----> both the AUSTRALIA and USA APIs run.
Case 2. Start year at 1985 or later, end year after 1985 ----> only the AUSTRALIA API should run.
Case 3. Start year before 1985, end year before 1985 ------> only the USA API runs.
The problem is that I cannot figure out how to write the code for Case 1 after writing the code for Case 2 (Australia API) and Case 3 (USA API).
import requests
import json
import argparse
import time
import pandas as pd
import warnings
warnings.filterwarnings("ignore")
parser = argparse.ArgumentParser(description="Process some integers.")
parser.add_argument("-latitude", help="Latitude(Degress)")
parser.add_argument("-longitude", help="Longitude(Degress)")
parser.add_argument("-startYear", help="Start of the Year")
parser.add_argument("-endYear", help="End of the Year")
parser.add_argument("--verbose", help="display processing information")
start = time.time()
def main(latitude, longitude, startYear, endYear, verbose):
    parameters = {
        "latd": latitude,   # [deg]
        "latm": 00,         # [deg]
        "lats": 00,         # [deg]
        "lond": longitude,  # [deg]
        "lonm": 00,         # [deg]
        "lons": 00,         # [deg]
        "elev": 00,         # [km]
        "year": None,       # [YYYY]
        "month": '07',      # [MM]
        "day": '01',        # [DD]
        "Ein": 'D'          # [Model]
    }
    hostname = "https://api.geomagnetism.ga.gov.au/agrf"
    hostname1 = "http://www.ngdc.noaa.gov/geomag-web/calculators/calculateDeclination?%s"
    df_1 = pd.DataFrame()
    for year in range(startYear, endYear):
        if (startYear >= 1985 and endYear > 1985):
            -----
        elif (startYear < 1985 and endYear < 1985):
            -------
if endYear < 1985:
    if startYear < 1985:
        ...  # Case 3
    else:
        ...  # endYear < 1985 but startYear >= 1985 (probably an input error)
else:
    if startYear >= 1985:
        ...  # Case 2
    else:
        ...  # Case 1
if end > 1985:
    australia
    if start < 1985:
        usa
else:
    usa
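A minimal runnable sketch of that branching, with hypothetical call_australia_api() and call_usa_api() stubs standing in for the real requests (the way the year range is split at 1985 is an assumption about the intended behaviour):

def call_australia_api(start, end):
    print("AUSTRALIA API for {}-{}".format(start, end))  # placeholder for the real request

def call_usa_api(start, end):
    print("USA API for {}-{}".format(start, end))  # placeholder for the real request

def run(startYear, endYear):
    if endYear >= 1985:
        call_australia_api(max(startYear, 1985), endYear)  # Cases 1 and 2
        if startYear < 1985:
            call_usa_api(startYear, 1984)                  # Case 1 also needs the USA API
    else:
        call_usa_api(startYear, endYear)                   # Case 3

run(1970, 2000)  # Case 1 -> both APIs
run(1990, 2000)  # Case 2 -> Australia only
run(1960, 1980)  # Case 3 -> USA only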
Newbie alert, please bear with me.
The data I have right now, after running the line of code below, is:
Input-
data = pd.DataFrame(alice._AliceBlue__master_contracts_by_symbol)
Output-
Index           NSE
1018GS2026 GS   Instrument(exchange='NSE', token=6833, symbol='1018GS2026 GS', name='GOI LOAN 10.18% 2026', expiry=None, lot_size='1')
1025GS2021 GS   Instrument(exchange='NSE', token=6819, symbol='1025GS2021 GS', name='GOI LOAN 10.25% 2021', expiry=None, lot_size='1')
116GS2020 GS    Instrument(exchange='NSE', token=6814, symbol='116GS2020 GS', name='GOI LOAN 11.60% 2020', expiry=None, lot_size='1')
182D010721 TB   Instrument(exchange='NSE', token=1776, symbol='182D010721 TB', name='GOI TBILL 182D-01/07/21', expiry=None, lot_size='100')
182D020921 TB   Instrument(exchange='NSE', token=2593, symbol='182D020921 TB', name='GOI TBILL 182D-02/09/21', expiry=None, lot_size='100')
I want a DataFrame like this, built from the DataFrame above:
Index Exchange token symbol name expiry lot_size
1018GS2026 GS NSE 6833 1018GS2026 GS GOI LOAN 10.18% 2026 None 1
1025GS2021 GS NSE 6819 1025GS2021 GS GOI LOAN 10.25% 2021 None 1
116GS2020 GS NSE 6814 116GS2020 GS GOI LOAN 11.60% 2020 None 1
182D010721 TB NSE 1776 182D010721 TB GOI TBILL 182D-01/07/21 None 100
182D020921 TB NSE 2593 182D020921 TB GOI TBILL 182D-02/09/21 None 100
Any suggestions? What should I do?
If the master contract mapping has the index as its key and an Instrument object as its value, then it should be easy to convert:
rows = []
for val in alice._AliceBlue__master_contracts_by_symbol.values():
    rows.append([val.exchange, val.token, val.symbol, val.name,
                 val.expiry, val.lot_size])

df = pd.DataFrame(rows,
                  index=alice._AliceBlue__master_contracts_by_symbol.keys(),
                  columns=['exchange', 'token', 'symbol', 'name', 'expiry', 'lot_size']
                  )
Edit:
If the AliceBlue thing is really an OrderedDict of OrderedDicts, then it is even easier:
df = pd.DataFrame(
    alice._AliceBlue__master_contracts_by_symbol.values(),
    index=alice._AliceBlue__master_contracts_by_symbol.keys()
)
The correct way is below:
df = pd.DataFrame(alice._AliceBlue__master_contracts_by_symbol)
df = pd.json_normalize([x._asdict() for x in df['NSE']]).set_index(df.index)
Thanks everyone for the help.
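For anyone trying this without the AliceBlue object at hand, here is a tiny self-contained illustration of the _asdict() + json_normalize pattern above, with a stand-in namedtuple playing the role of the library's Instrument class:

from collections import namedtuple
import pandas as pd

# Stand-in for the real Instrument objects returned by the library
Instrument = namedtuple('Instrument', ['exchange', 'token', 'symbol', 'name', 'expiry', 'lot_size'])

contracts = {
    '1018GS2026 GS': Instrument('NSE', 6833, '1018GS2026 GS', 'GOI LOAN 10.18% 2026', None, '1'),
    '182D010721 TB': Instrument('NSE', 1776, '182D010721 TB', 'GOI TBILL 182D-01/07/21', None, '100'),
}

df = pd.DataFrame({'NSE': pd.Series(contracts)})
df = pd.json_normalize([x._asdict() for x in df['NSE']]).set_index(df.index)
print(df)  # one column per Instrument field, indexed by the contract symbol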
So I have several log files; they are structured like this:
Sep 9 12:42:15 apollo sshd[25203]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=189.26.255.11
Sep 9 12:42:15 apollo sshd[25203]: pam_succeed_if(sshd:auth): error retrieving information about user ftpuser
Sep 9 12:42:17 apollo sshd[25203]: Failed password for invalid user ftpuser from 189.26.255.11 port 44061 ssh2
Sep 9 12:42:17 apollo sshd[25204]: Received disconnect from 189.26.255.11: 11: Bye Bye
Sep 9 19:12:46 apollo sshd[30349]: Did not receive identification string from 199.19.112.130
Sep 10 03:29:48 apollo unix_chkpwd[4549]: password check failed for user (root)
Sep 10 03:29:48 apollo sshd[4546]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.12.29.170 user=root
Sep 10 03:29:51 apollo sshd[4546]: Failed password for root from 221.12.29.170 port 56907 ssh2
There are more dates and times, but this is just an example. I was wondering how I would calculate the total time span that the file covers. I've tried a few things and have had about 5 hours of no success.
I tried this first, and it was close, but it didn't work like I wanted it to; it kept repeating dates:
with open(filename, 'r') as file1:
    lines = file1.readlines()
    for line in lines:
        linelist = line.split()
        date2 = int(linelist[1])
        time2 = linelist[2]
        print linelist[0], linelist[1], linelist[2]
        if date1 == 0:
            date1 = date2
            dates.append(linelist[0] + ' ' + str(linelist[1]))
        if date1 < date2:
            date1 = date2
            ttimes.append(datetime.strptime(str(ltime1), FMT) - datetime.strptime(str(time1), FMT))
            time1 = '23:59:59'
            ltime1 = '00:00:00'
            dates.append(linelist[0] + ' ' + str(linelist[1]))
        if time2 < time1:
            time1 = time2
        if time2 > ltime1:
            ltime1 = time2
If the entries are in chronological order, you can just look at the first and the last entry:
entries = lines.split("\n")  # assumes `lines` holds the whole file as a single string
first_date = entries[0].split("apollo")[0]
last_date = entries[-1].split("apollo")[0]
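To turn those two strings into an actual time span they still need to be parsed; a sketch under the assumption that the lines keep the syslog layout shown above (filename is the log path from the question, and the year has to be assumed because syslog does not record it):

from datetime import datetime

with open(filename) as file1:
    entries = file1.read().strip().split("\n")

first_date = entries[0].split("apollo")[0].strip()
last_date = entries[-1].split("apollo")[0].strip()

year = datetime.now().year  # the log carries no year, so assume the current one
fmt = "%b %d %H:%M:%S %Y"
span = (datetime.strptime("{} {}".format(last_date, year), fmt)
        - datetime.strptime("{} {}".format(first_date, year), fmt))
print(span)  # 14:47:36 for the sample lines above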
We don't have the year, so I took the current year. Read all the lines, convert the three-letter month to its index, and parse each date.
Then sort the list (so it works even if the logs are mixed), take the first and last items, and subtract. Enjoy.
from datetime import datetime

months = ["", "Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
current_year = datetime.now().year

dates = list()
with open(filename, 'r') as file1:
    for line in file1:
        linelist = line.split()
        if linelist:  # filter out possible empty lines
            linelist[0] = str(months.index(linelist[0]))  # convert 3-letter month to its index
            date2 = int(linelist[1])
            z = datetime.strptime(" ".join(linelist[0:3]) + " " + str(current_year), "%m %d %H:%M:%S %Y")  # compose & parse the date
            dates.append(z)  # store in list

dates.sort()  # sort the list
first_date = dates[0]
last_date = dates[-1]

# print report & compute time span
print("start {}, end {}, time span {}".format(first_date, last_date, last_date - first_date))
result:
start 2016-09-09 12:42:15, end 2016-09-10 03:29:51, time span 14:47:36
Note that it won't work properly when the log spans December 31st to January 1st, because of the missing year info. I suppose we could make a guess: if we find both January and December in the log, assume the January entries belong to the next year. Unsupported yet.
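A rough sketch of that guess, applied to the dates list built above (it only makes sense for logs that genuinely wrap from December into January; a log spanning most of a year would fool it):

# If both December and January appear, treat the January entries as next year.
months_seen = {d.month for d in dates}
if 12 in months_seen and 1 in months_seen:
    dates = [d.replace(year=d.year + 1) if d.month == 1 else d for d in dates]
    dates.sort()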