I'm new to programming and learning Python 3. Recently I've been trying to build a cryptocurrency trading system using Binance's API.
Here's the API document. The document's logic and explanation for the timestamp are as follows:
timestamp: the millisecond timestamp of when the request was created and sent.
if (timestamp < serverTime && (serverTime - timestamp) <= recvWindow) {
  // process request
} else {
  // reject request
}
According to this logic, the time at which I send the request must be less than the time on the server. The problem is that my requests fail this check.
When I compare time.time() with the server time using this code:
import requests
import simplejson as json
import time

base_url = "https://api.binance.com"
servertime_endpoint = "/api/v1/time"
url = base_url + servertime_endpoint

t = time.time() * 1000  # local time in milliseconds, taken just before the request
r = requests.get(url)
result = json.loads(r.content)
print(int(t) - result["serverTime"])  # positive means my clock is ahead of the server
time.time() is bigger than the server time, so the last line prints a positive value. What should I do?
This is most likely because the operating system you are running uses a clock with a lower resolution than the one the server is running. On Linux or macOS, Python's time.time() uses a system call that returns the time down to microsecond resolution (or better). On Windows, it only returns the time down to millisecond resolution.
You can check the resolution of the time.time() function by programming a busy loop and waiting until the time changes: use the code in this incredibly useful answer to see what your resolution is.
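If that answer is unavailable, here is a minimal sketch of the same idea (my own, not the linked code): busy-wait until time.time() changes and take the smallest observed step as the resolution.
import time

def clock_resolution():
    t0 = time.time()
    t1 = time.time()
    while t1 == t0:        # busy-wait until the reported time actually changes
        t1 = time.time()
    return t1 - t0         # smallest observable step of time.time() on this OS

# take the minimum over many trials to filter out scheduling noise
print(min(clock_resolution() for _ in range(1000)))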
If you are running on an OS with a resolution of ~0.001 second (1 millisecond) while the server is reporting times at a resolution of ~0.000001 second (1 microsecond), then even if your clocks were exactly in sync and there is zero network latency, you would still expect your time to be ahead of the server time on 50% of the calls simply due to quantization noise. For instance, if the server reports a time of 12345.678501 (microsecond resolution), your machine would report a time of 12345.679 (millisecond resolution) and appear to be 499 microseconds ahead.
Some quick solutions are to:
check if the server time rounds to your machine time and call that acceptable even if it appears your time is ahead of the server time;
subtract 500 microseconds from your time to guarantee that quantization noise can't put you ahead of the server;
increase the timing threshold by 500 microseconds and check that the absolute value of the difference between your time and the server time is within the bounds (see the sketch after this list);
run your code on an operating system with a higher-resolution system clock.
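For example, here is a minimal sketch combining the second and third ideas (it assumes Binance's default recvWindow of 5000 ms; the buffer size is my own choice):
import time
import requests

BASE_URL = "https://api.binance.com"
RECV_WINDOW_MS = 5000   # assumed default receive window
BUFFER_MS = 1           # at least 500 microseconds of quantization headroom

server_time = requests.get(BASE_URL + "/api/v1/time").json()["serverTime"]
local_ms = int(time.time() * 1000) - BUFFER_MS  # shift back so quantization can't put us ahead

# mirror the server-side check, but on the absolute difference, with headroom
if abs(local_ms - server_time) <= RECV_WINDOW_MS + BUFFER_MS:
    print("timestamp would be accepted")
else:
    print("clock skew too large; sync your clock (e.g., via NTP)")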
I am having trouble figuring out how to write a publisher node (ROS Python) that will send the subscriber the actual time (i.e., 12:46), which I need to do for my homework. I need to publish the actual time so that the subscriber can use that time, plus a time difference requested through a service, to calculate the time somewhere else in the world. (I.e., Node1 publishes the time in New York; Node2 subscribes and requests the time difference between the current location, New York, and London; Node1 sends the time difference to Node2; Node2 adds the current time and the difference to find the current time in London.)
I have googled the issue and cannot find any helpful information. I have found some confusing sources that explain how to get the simulated time in seconds, and (maybe) the clock time in seconds, but I did not understand them well enough to use the information.
The code I have so far (sorry, I don't know how to format it right on this website):
#!/usr/bin/env python
from time import localtime

from beginner_tutorials.srv import TimeDiff, TimeDiffResponse
from std_msgs.msg import Time
import rospy

pub = rospy.Publisher('currentTime', Time, queue_size=10)
CurrentTime = localtime()

def setupPublisher():
    global CurrentTime
    rospy.init_node('talker', anonymous=True)
    rate = rospy.Rate(5)  # 5 Hz
    while not rospy.is_shutdown():
        CurrentTime = localtime()  # note: a Python struct_time, not a ROS Time
        pub.publish(CurrentTime)
        rate.sleep()

if __name__ == "__main__":
    setupPublisher()
I don't really have much more code to share because I don't know how to incorporate the time. The above is what I have so far, but I don't think it's right.
It should publish the time to the subscriber, but I have no idea what it is doing, or what variables I should be using, or anything for that matter. We didn't learn about this in class at all.
You should be familiar with all the ROS pre-defined message types (although some are used significantly more than others), including the std_msgs/Header message. It includes a time stamp (the same data as std_msgs/Time) and a frame_id (string) field, in which you could store the location the time refers to.
Also, the ROS time type (which the std_msgs Time message wraps) needs to be acquired from the appropriate ROS (rospy) method, not localtime() (although you could construct the time from localtime() if you wanted).
For more time/Time references, here are the overview, the client time libraries, and the Python/rospy specifics. The gist is these three functions (the first two are identical; the third returns the same time as a float):
rospy.Time.now() #get time as rospy.Time instance
rospy.get_rostime() #get time as rospy.Time instance, same as .now()
rospy.get_time() #get time as float secs
Remember that the standard for the "current time" is seconds since 1970 UTC/GMT (timezone +/-0). rospy.Time holds to this, as it uses Python's time.time() for the current ROS time (stored as whole seconds since the epoch plus extra nanoseconds into the second). This lets you use the rest of Python's tools to format it as you wish. ROS also allows a "simulated time" with the Clock, as you saw, but that is a feature for simulation and isn't what you get by default when you use Time (no need to worry).
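Putting that advice together, here is a minimal sketch of the publisher (one possible layout, not the only one): it keeps the question's 'currentTime' topic and 5 Hz rate, but publishes a std_msgs/Header so the ROS time stamp and the location string travel together.
#!/usr/bin/env python
import rospy
from std_msgs.msg import Header

def setup_publisher():
    rospy.init_node('talker', anonymous=True)
    pub = rospy.Publisher('currentTime', Header, queue_size=10)
    rate = rospy.Rate(5)  # 5 Hz
    while not rospy.is_shutdown():
        msg = Header()
        msg.stamp = rospy.Time.now()   # current ROS time, not localtime()
        msg.frame_id = "New York"      # where this time was taken
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    setup_publisher()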
In Python, by importing the datetime module and using the various functions of the datetime.datetime class, we can get basic dates with formatting and even do date arithmetic.
For example, datetime.datetime.now() returns the current date and time.
But today, when I ran this program, my computer had no internet connection, yet it still output today's date.
So how can datetime.datetime.now() return the proper date? Does the algorithm automatically increment the date every 24 hours?
tl;dr datetime.datetime.now() uses the clock built into your computer.
Computers have been able to keep fairly accurate time for much longer than the Internet has existed.
For example, PCs feature what's called a real-time clock (RTC). It is battery-powered and can keep the time even when the computer is switched off.
Interestingly, some distributed algorithms require very accurate clocks in order to operate reliably. The required accuracy far exceeds anything that a simple oscillator-based clock can provide.
As a result, companies like Google operate GPS and atomic clocks in their data centres (and even those are not without potential issues, as was demonstrated, for example, on 26 January 2016, when some GPS clocks were out by 13 microseconds for ten hours).
Even though the data centres are connected to the Internet, neither GPS nor atomic clocks require an Internet connection to operate. Besides, someone needs to keep all that Internet time infrastructure running... it can't be that everyone gets their time "off the Internet". ;)
Now that we're on the subject of distributing the time across computer networks, the main protocols for doing that are NTP (Network Time Protocol) and PTP (Precision Time Protocol).
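For completeness, here is a minimal sketch of asking an NTP server for the time from Python. It uses the third-party ntplib package (an assumption: it is not part of the standard library and must be installed separately).
import ntplib  # third-party package: pip install ntplib

# ask an NTP server for the current time and compare with the local clock
client = ntplib.NTPClient()
response = client.request('pool.ntp.org', version=3)
print("NTP time:     %f" % response.tx_time)  # seconds since the epoch, per the server
print("local offset: %f s" % response.offset) # estimated offset of the local clock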
The documentation for datetime.datetime.now() does not state the time is received from the internet.
Return the current local date and time. If optional argument tz is None or not specified, this is like today(), but, if possible, supplies more precision than can be gotten from going through a time.time() timestamp (for example, this may be possible on platforms supplying the C gettimeofday() function).

If tz is not None, it must be an instance of a tzinfo subclass, and the current date and time are converted to tz's time zone. In this case the result is equivalent to tz.fromutc(datetime.utcnow().replace(tzinfo=tz)). See also today(), utcnow().
The datetime comes from the computer's clock. If you are running Windows, for example, try changing the time from within Windows: Python will then print the time you set.
Check the documentation: https://docs.python.org/2/library/datetime.html
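A quick way to convince yourself: disconnect from the network and run this; both values are read straight from the machine's clock, no internet involved.
import datetime
import time

print(datetime.datetime.now())  # local date and time from the system clock
print(time.time())              # the underlying seconds-since-epoch counter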
I need to count the number of seconds that pass during the execution of some code on a Raspberry Pi. Normally I'd do it as follows in Python:
start = time.time()
execute_my_function()
end = time.time()
elapsed = end - start
However, the Raspberry Pi doesn't include an RTC and instead relies on NTP. This means that for the first little while after booting, the system time is January 1, 1970, and so the difference between "end" and "start" often becomes about 47 years.
How do I measure the elapsed time in seconds if the system time is unreliable (from what I can gather, the "timeit" module relies on "time" and thus won't work either)? It doesn't have to be completely accurate--a second or two too much or too little is fine.
Edit: I've made a sort of hack where I read /proc/uptime which I believe is independent of the system time, but I kind of feel dirty this way. I'm hoping there is a somewhat less OS dependent solution.
You could have your program wait until start_time has a meaningful value:
while time.time() < 1e6:
    time.sleep(10)
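If you take that route, a minimal sketch of the whole flow might look like this (execute_my_function is a stand-in for the question's workload):
import time

def execute_my_function():
    time.sleep(2)  # stand-in for the code being timed

def wait_for_clock_sync():
    # anything before ~1970-01-12 (1e6 s after the epoch) means NTP
    # has not corrected the clock yet
    while time.time() < 1e6:
        time.sleep(10)

wait_for_clock_sync()
start = time.time()
execute_my_function()
end = time.time()
print("elapsed: %.1f s" % (end - start))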
The Amazon API limit is apparently 1 req per second or 3600 per hour. So I implemented it like so:
while True:
    # sql stuff
    time.sleep(1)
    result = api.item_lookup(row[0],
                             ResponseGroup='Images,ItemAttributes,Offers,OfferSummary',
                             IdType='EAN', SearchIndex='All')
    # sql stuff
Error:
amazonproduct.errors.TooManyRequests: RequestThrottled: AWS Access Key ID: ACCESS_KEY_REDACTED. You are submitting requests too quickly. Please retry your requests at a slower rate.
Any ideas why?
This code looks correct, and it looks like the 1 request/second limit is still current:
http://docs.aws.amazon.com/AWSECommerceService/latest/DG/TroubleshootingApplications.html#efficiency-guidelines
You want to make sure that no other process is using the same associate account. Depending on where and how you run the code, there may be an old version of the VM still running, another instance of your application, a copy in the cloud alongside the one on your laptop, or, if you are using a threaded web server, multiple threads all running the same code.
If you still hit the query limit, you just want to retry, possibly with a TCP-like "additive increase/multiplicative decrease" back-off, as sketched below. You start by setting extra_delay = 0. When a request fails, you set extra_delay += 1 and sleep(1 + extra_delay), then retry. When it finally succeeds, set extra_delay = extra_delay * 0.9.
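A minimal sketch of that back-off (it assumes the api object and the SQL result rows from the question):
import time
from amazonproduct.errors import TooManyRequests

# `api` and `rows` are assumed to come from the question's setup
extra_delay = 0.0
for row in rows:
    while True:
        time.sleep(1 + extra_delay)  # base 1 s rate limit plus adaptive padding
        try:
            result = api.item_lookup(row[0],
                                     ResponseGroup='Images,ItemAttributes,Offers,OfferSummary',
                                     IdType='EAN', SearchIndex='All')
            extra_delay *= 0.9       # success: decay the padding multiplicatively
            break                    # move on to the next row
        except TooManyRequests:
            extra_delay += 1.0       # throttled: increase the padding and retry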
Computer time is funny
This post is correct in saying "it varies in a non-deterministic manner" (https://stackoverflow.com/a/1133888/5044893). Depending on a whole host of factors, the time measured by a processor can be quite unreliable.
This is compounded by the fact that Amazon's API runs on a different clock than your program. The two are certainly not in sync, and there is likely some overlap between their "1 second" measurement and your program's. Amazon probably tries to average out this inconsistency, and they likely allow a small margin of error, maybe +/- 5%. Even so, the discrepancy between your clock and theirs is probably what triggers the RequestThrottled error.
Give yourself some buffer
Here are some thoughts to consider.
Do you really need to hit the Amazon API every single second? Would your program work with a 5-second interval? Even a 2-second interval halves your request rate and makes a lockout far less likely. Also, Amazon may be charging you for every service call, so spacing them out could save you money.
This is really a question of "optimization" now. If you use a constant variable to control your API call rate (say, SLEEP = 2), then you can adjust that rate easily. Fiddle with it, increase and decrease it, and see how your program performs.
Push, not pull
Sometimes, hitting an API every second means that you're polling for new data. Polling is notoriously wasteful, which is why the Amazon API has a rate limit.
Instead, could you switch to a queue-based approach, as sketched below? Amazon SQS can feed events to your programs. This is especially easy if you host them with AWS Lambda.
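A minimal sketch of the consuming side with boto3 long polling (the queue URL is hypothetical, and this assumes something upstream is producing messages):
import boto3

sqs = boto3.client('sqs')
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # hypothetical

while True:
    # WaitTimeSeconds=20 enables long polling: the call blocks until a message
    # arrives or 20 s pass, instead of hammering the endpoint once a second
    resp = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20)
    for msg in resp.get('Messages', []):
        print(msg['Body'])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg['ReceiptHandle'])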
I want to measure the time in milliseconds that this line took:
before=datetime.datetime.now()
response = urllib2.urlopen("https://www.google.com")
after=datetime.datetime.now()
It is supposed to be a kind of workaround for a server that doesn't ping back, so I have to measure it from the server's response.
I can get the string 0:00:00.034225 back if I subtract the two times, and I am able to grab the milliseconds as a substring, but I would like to get the milliseconds in some cleaner way (the whole difference in ms, including the time converted from seconds, if the server responds with a really big delay).
after - before is a datetime.timedelta object whose total_seconds method will give you what you are looking for. You can find additional information in the Python docs.
You will just have to multiply by 1000 to get milliseconds. Don't worry, although the method is called total_seconds, it includes milliseconds as decimal places. Sample output:
>>> d = t1 - t0
>>> d.total_seconds()
2.429001
This won't give you a timeout though, only a measurement of the duration.
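For example, with the question's snippet:
import datetime
import urllib2

before = datetime.datetime.now()
response = urllib2.urlopen("https://www.google.com")
after = datetime.datetime.now()

# total_seconds() includes the sub-second part, so multiplying by 1000
# gives the full difference in milliseconds
elapsed_ms = (after - before).total_seconds() * 1000
print("%.0f ms" % elapsed_ms)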
urlopen allows you to pass a timeout parameter, and will automatically abort after that much time has elapsed. From the docs:
urllib2.urlopen(url[, data][, timeout])

The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if not specified, the global default timeout setting will be used). This actually only works for HTTP, HTTPS and FTP connections.
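For example, a minimal sketch (the 5-second value is arbitrary):
import socket
import urllib2

try:
    response = urllib2.urlopen("https://www.google.com", timeout=5)
except urllib2.URLError as e:
    print("request failed or connection timed out: %s" % e.reason)
except socket.timeout:
    print("read timed out")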
Python actually has a mechanism for timing small pieces of code -- timeit.Timer -- but that's for performance profiling and testing, not for implementing your own timeouts.