I have a similar question to this one, but the solution there did not apply to my problem. I can connect to and send commands to my Keysight B1500 mainframe via pyvisa/GPIB. The B1500 is connected via Keysight's IO tool, Connection Expert:
import pyvisa

rman = pyvisa.ResourceManager()
keyS = rman.open_resource('GPIB0::18::INSTR')
keyS.timeout = 20000  # time in ms
keyS.chunk_size = 8204800  # 102400 is 100 kB
keyS.write('*rst; status:preset; *cls')
print('variable keyS is being assigned to ', keyS.query('*IDN?'))
Using this pyvisa object I can query without issues (the *IDN? above returns the expected output), and I have also run a different type of IV curve on the same tool and extracted its data.
However, when I try to run a pulsed voltage sweep (changing the pulse voltage as a function of time and measuring current), I do not get the measured data out of the tool. I can hook the output lead from the B1500 to an oscilloscope and see that my setup has worked and the tool is behaving as expected, right up until I try to extract the sweep data.
Again, I can run a standard non-pulsed sweep on the tool and the data extraction works fine using [pyvisaobject].read_raw() - so something is different with the way I'm pulsing the voltage.
What I'm looking for is a way to interrogate the connection in cases where the data transfer is unsuccessful.
Here, in no particular order, are the ways I've tried to extract data. These methods are suggested in this link:
keyS.query_ascii_values('CURV?')
or
keyS.read_ascii_values()
or
keyS.query_binary_values('CURV?')
or
keyS.read_binary_values()
This link from the vendor does cover the extraction of data, but it also doesn't yield data in the read statement in the case of a pulsed voltage sweep:
myFieldFox.write("TRACE:DATA?")
ff_SA_Trace_Data = myFieldFox.read()
Also tried (based on tab autocompletion in IPython):
read_raw() # This is the one that works with non-pulsed sweep
and
read_bytes(nbytes)
The suggestion from @Paul-Cornelius is a good one; I had to include an *OPC? to get the previous data transfer to work as well. So right before I attempt the data transfer, I send these lines:
rep = keyS.query('NUB?')
keyS.query('*OPC?')
print(rep,'AAAAAAAAAAAAAAAAAAAAA') # this line prints!
mretholder = keyS.read_raw() # system hangs here!
In all cases the end result is the same - I get a timeout error:
pyvisa.errors.VisaIOError: VI_ERROR_TMO (-1073807339): Timeout expired before operation completed.
The tracebacks for all of these show that they are all using the same basic framework from:
chunk, status = self.visalib.read(self.session, size)
Hoping someone has seen this before, or at least has some ideas on how to troubleshoot. Thanks in advance!
I don't have access to that instrument, but to read a waveform from a Tektronix oscilloscope I had to do the following (pyvisa module):
On obtaining the Resource, I do this:
resource.timeout = 10000
In some cases, I have to wait for a command to complete before proceeding like this:
resource.query("*opc?")
To transfer a waveform from the scope to the PC I do this:
ascii_waveform = resource.query("wavf?")
The "wavf?" query is specifically for this instrument, but the "*opc?" query is generic ("wait for operation complete"), and so is the method for setting the timeout parameter.
There is a lot of information in the user's guide (on the same site you have linked to) about reading data from various devices. I have used the visa library a few times on different devices, and it always requires some fiddling to get it to work.
It looks like you don't have any time delays in your program. In my experience they are almost always necessary in some form, whether through the resource.timeout feature, the *opc? query, or both.
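Putting those pieces together, here is a minimal sketch of the pattern (the GPIB address and timeout value are taken from the question; the status queries are generic IEEE-488.2 and may or may not reveal anything useful on this particular instrument):

import pyvisa

rman = pyvisa.ResourceManager()
inst = rman.open_resource('GPIB0::18::INSTR')
inst.timeout = 20000  # ms; generous timeout for slow sweeps

# Block until the instrument reports the previous operation complete.
inst.query('*OPC?')

# If a transfer still hangs, the status byte and the standard event
# status register can sometimes show why:
print('STB:', inst.read_stb())      # status byte via GPIB serial poll
print('ESR:', inst.query('*ESR?'))  # standard event status register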
There are millions of rows of data that need to be written to Cassandra. I have tried the following methods:
The first: following the reference code given for the DataStax java-driver (or python-driver) on GitHub, my code is similar to:
// The following code is fixed, and this part will be omitted later.
String cassandraHost = "******";
String keyspace = "******";
String table = "******";
String insertCqlStr = " insert into " + keyspace + "." + table + "( "
        + "id,date,value)"
        + " values ( ?, ?, ?) ;";
CqlSession session = CqlSession.builder()
        .addContactPoint(new InetSocketAddress(cassandraHost, 9042))
        .withLocalDatacenter("datacenter1")
        .withKeyspace(CqlIdentifier.fromCql(keyspace))
        .build();
PreparedStatement preparedStatement = session.prepare(insertCqlStr);

// The code below is the part that changes (or at least what I think it should be).
for (List<String> row : rows) {
    session.execute(
        preparedStatement.bind(row.get(0), row.get(1), row.get(2))
                         .setConsistencyLevel(ConsistencyLevel.ANY));
}
session.close();
This code works fine, but it is simply too slow for me to accept. So I tried the asynchronous API provided by the driver, and the code is almost the same as the code above:
for (List<String> row : rows) {
    session.executeAsync(
        preparedStatement.bind(row.get(0), row.get(1), row.get(2))
                         .setConsistencyLevel(ConsistencyLevel.ANY));
}
session.close();
Please excuse my limited asynchronous programming experience. This works, but it has a fatal problem: I found that it does not write all of the data into the database. I would like to know the correct usage for calling the async API.
I also tried the BatchStatement methods provided by the driver. I know this approach is officially discouraged as a way to improve performance, and it has many limitations. For example, as far as I know, the number of statements in a batch cannot exceed 65,535, and in the default configuration the batch data-length warning limit is 5 kB and the error limit is 50 kB. But I kept the number of statements below 65,535 and modified the default configuration as follows:
List<BoundStatement> boundStatements = new ArrayList<>();
int count = 0;
BatchStatement batchStatement = BatchStatement.newInstance(BatchType.UNLOGGED);
for (List<String> row : rows) {
    // The actual code here is looping multiple times instead of exiting directly.
    if (count >= 65535) {
        break;
    }
    BoundStatement boundStatement =
        preparedStatement.bind(row.get(0), row.get(1), row.get(2));
    boundStatements.add(boundStatement);
    count += 1;
}
BatchStatement batch = batchStatement.addAll(boundStatements);
session.execute(batch.setConsistencyLevel(ConsistencyLevel.ANY));
// session.executeAsync(batch.setConsistencyLevel(ConsistencyLevel.ANY));
session.close();
It also works, and it is actually more efficient than the asynchronous API, and the synchronous interface ensures data integrity. If the asynchronous API is used to execute the BatchStatement here, the incomplete data mentioned above also occurs. But this method still doesn't meet my requirements: I need to execute it with multiple threads, and when I do, it throws this error:
Caused by: com.datastax.oss.driver.api.core.DriverTimeoutException: Query timed out after PT2S
Summary: I've tried synchronous writes, asynchronous writes, and the Batch-related methods, and each has issues I can't accept. I need to know how to properly use the async API so as not to lose data, and where I'm going wrong. As for the BatchStatement-related methods, I don't expect them to work; it would be great if you could give me a workable suggestion. Thank you!
Instead of trying to write the data-loading code yourself, I would recommend adopting the DSBulk tool, which is heavily optimized for loading/unloading data to/from Cassandra. It's open source, so you can even use it as a Java library.
There are a few reasons for that:
Writing async code isn't easy - you need to make sure that you aren't sending too many requests over the same connection (Cassandra has a limit on the number of in-flight requests). For driver 3.x you can use something like this, and driver 4.x has built-in rate-limiting capabilities. (A sketch of this throttling pattern follows at the end of this answer.)
Batches in Cassandra often lead to performance degradation when used incorrectly. A batch should be used only for submitting data that belongs to the same partition; otherwise it puts a higher load on the coordinator node, and you also need to implement custom routing.
DSBulk does all of that very efficiently, as it was written by people who work with Cassandra every day in large-scale setups.
P.S. In your case, consistency level ANY means that the coordinator just acknowledges receiving the data but doesn't guarantee that it will actually be written (for example, if the coordinator crashes).
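If you do want to stay with driver code, the usual cause of the missing rows is that executeAsync returns immediately and nothing waits for the outstanding futures before session.close(). A minimal sketch of bounded-concurrency async writes with the DataStax python-driver (mentioned in the question); the contact point, keyspace, table, and the rows iterable are placeholders:

import threading
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])          # placeholder contact point
session = cluster.connect('my_keyspace')  # placeholder keyspace
insert = session.prepare(
    'INSERT INTO my_table (id, date, value) VALUES (?, ?, ?)')

MAX_IN_FLIGHT = 128  # bound on concurrent requests
sem = threading.Semaphore(MAX_IN_FLIGHT)
errors = []

def on_done(_result):
    sem.release()

def on_error(exc):
    errors.append(exc)  # record failures instead of silently losing them
    sem.release()

for row in rows:         # rows: your iterable of (id, date, value) tuples
    sem.acquire()        # blocks once MAX_IN_FLIGHT requests are outstanding
    future = session.execute_async(insert, row)
    future.add_callbacks(on_done, on_error)

# Drain: reacquire every permit, so all outstanding writes have completed.
for _ in range(MAX_IN_FLIGHT):
    sem.acquire()

print('failed writes:', len(errors))
cluster.shutdown()

The same idea (a semaphore or a fixed-size pool of futures) carries over to the Java driver's executeAsync/CompletionStage API.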
I am trying to control the flow of received data over a serial DB9 RS-232 connection, and I am using PySerial.
Normally I should receive 13 bytes of data, but I want to be able to control the flow when the transmitter sends more than 13 bytes. For this I am considering the flow-control options, but I am not familiar with them.
So can anyone please give me examples of using xonxoff, rtscts, and dsrdtr?
Thanks for your help!
Serial-port flow control manages the buffer and the amount of data buffered in it to prevent buffer overruns; that is not what you want.
It also does not guarantee exact sizes or timing.
Your options are either to accept all the data you receive, analyze its contents, and cut out the data blocks, or to establish your own protocol with the other device in a command/response or ENQ/ACK/NAK format so that only one data block is sent at a time.
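For the first option, a minimal sketch of reading and framing with PySerial; the port name, baud rate, and the assumption that each 13-byte block ends in CR/LF are illustrative, so check them against your device's documentation:

import serial

ser = serial.Serial('/dev/ttyS0', 9600, timeout=1)  # placeholder port/baud

while True:
    # read_until collects bytes up to the terminator (or the timeout);
    # this assumes the device terminates each block with CR/LF.
    block = ser.read_until(b'\r\n')
    if len(block) == 13:
        process(block)  # hypothetical handler for one valid reading
    # anything else is a partial or oversized block and is discarded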
In addition:
Once the weight value has stabilized, you just need to check whether the data is valid until some event occurs, and then you can discard the data itself. Just because you receive the data does not mean the main application (POS?) must use it.
Is the system crash actually happening?
Or is it just fear, uncertainty, or doubt because you don't fully understand the situation?
The system will not crash just by receiving a large amount of unused data.
You just need enough buffer space to check the format.
Also, is it really a lot of data? For example, if you receive 13 bytes of data every 50 to 100 ms, that is not a large amount at all.
It is much more likely that there is a bug in the application code that processes it.
That said, if you want to do something, read the weigh scale specifications carefully.
Alternatively, add a description to the question or provide a link to the specification documentation.
If your weigh scale doesn't have the ability to stop sending data via flow control, you are just wasting time and effort and increasing the likelihood of bugs.
Even if you receive it thousands to millions of times, and even if every reading were kept in a separate buffer, it would amount to a few tens of megabytes at most, roughly a minute's worth of video data.
Normally the same buffer is reused, so it cannot grow to such a size anyway.
Software and system bugs are ubiquitous, so a crash for some reason is always possible, but the weigh scale you're trying to use probably already has a large number of users running it.
If the weigh scale works with its current specifications and there are no problems in the store, you don't have to do anything unusual.
If you still want to try it, you can do the following.
If you want to let the PySerial module or the system take control, set one of the following to True:
xonxoff
rtscts
dsrdtr
However, the size of the buffer is fixed, and the size and timing of the control events cannot be configured.
If you want to control it yourself, set all of the above to False and do the following yourself:
software flow control:
Write XON = 0x11, XOFF = 0x13.
hardware flow control:
Set rts or dtr to True/False.
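A short sketch of both variants; the port name and baud rate are placeholders, and whether the transmitter honors any of this depends entirely on the device:

import serial

# Let PySerial/the driver handle it: enable exactly one mechanism.
ser = serial.Serial('/dev/ttyS0', 9600,
                    xonxoff=True,   # software flow control
                    rtscts=False,   # hardware RTS/CTS
                    dsrdtr=False)   # hardware DSR/DTR

# Or take control yourself, with all three options above set to False:
ser.write(b'\x13')   # XOFF: ask the transmitter to pause
data = ser.read(13)  # drain what we want at our own pace
ser.write(b'\x11')   # XON: ask it to resume

# Hardware variant: toggle the control lines directly.
ser.rts = False      # deassert RTS; the transmitter should pause
ser.dtr = True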
Actual problem:
I have a controller node that subscribes to two topics and publishes to one topic. Although in simulation everything seems to work as expected, on the actual hardware the performance degrades. I suspect the problem is that one of the two input topics lags behind the other by a significant amount of time.
Question:
I want to re-create this behavior in simulation in order to test the robustness of the controller. Therefore, I need to delay one of the topics by a certain amount of time; ideally this should be a configurable parameter. I could write a node with a FIFO buffer that adjusts the delay time by monitoring the frequency of the topic. Before I do that: is there a command-line tool or any other quick-to-implement method that I can use?
P.S. I'm using Ubuntu 16.04 and ROS Kinetic.
I do not know of any out-of-the-box solution that does exactly what you describe.
For a quick hack, if your topic does not have a timestamp and the node just takes in messages as they arrive, the easiest thing to do would be to record a bag and play the two topics back from two different instances of rosbag play. Something like this:
first terminal
rosbag play mybag.bag --clock --topics /my/topic
second terminal, started some amount of time later
rosbag play mybag.bag --topics /my/other_topic
I'm not sure about the --clock flag; whether you need it depends mostly on what you mean by simulation. If you want to control the time difference more precisely than by pressing Enter in two different terminals, you could write a small bash script to launch them.
Another option that still involves bags, but gives you more control over the exact time each message is delayed by, is to edit the bag so that the messages already carry the correct delay. This can be done relatively easily by modifying the first example in the rosbag cookbook:
import rosbag

with rosbag.Bag('output.bag', 'w') as outbag:
    for topic, msg, t in rosbag.Bag('input.bag').read_messages():
        # This also replaces tf timestamps under the assumption
        # that all transforms in the message share the same timestamp
        if topic == "/tf" and msg.transforms:
            outbag.write(topic, msg, msg.transforms[0].header.stamp)
        else:
            outbag.write(topic, msg, msg.header.stamp if msg._has_header else t)
Replacing the if/else with:
...
import rospy
...
if topic == "/my/other_topic":
    outbag.write(topic, msg, t + rospy.Duration.from_sec(0.5))
else:
    outbag.write(topic, msg, t)
Should get you most of the way there.
Other than that, if you think the node would be useful in the future, or you want it to work on live data as well, you would need to implement the node you described with some kind of queue; a sketch follows. One thing you could look at for inspiration is topic_tools (see the topic_tools source on git).
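For reference, a minimal sketch of such a delay node; the topic names, the std_msgs/String message type, and the default delay are all placeholders for whatever your controller actually uses:

#!/usr/bin/env python
import collections

import rospy
from std_msgs.msg import String  # placeholder message type

def main():
    rospy.init_node('topic_delay')
    delay = rospy.Duration(rospy.get_param('~delay', 0.5))  # configurable
    buf = collections.deque()  # FIFO of (release_time, msg) pairs

    pub = rospy.Publisher('/my/other_topic_delayed', String, queue_size=100)
    rospy.Subscriber('/my/other_topic', String,
                     lambda msg: buf.append((rospy.Time.now() + delay, msg)))

    rate = rospy.Rate(200)  # poll well above the topic frequency
    while not rospy.is_shutdown():
        # Publish everything whose release time has passed, in FIFO order.
        while buf and buf[0][0] <= rospy.Time.now():
            pub.publish(buf.popleft()[1])
        rate.sleep()

if __name__ == '__main__':
    main()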
Hope you're well and thanks for reading.
I've been revisiting an old project, leveraging plotly to stream data out of MySQL, with Python in between the two. I've never had great luck with plot.ly (which I'm sure says more about my understanding than their platform); streams/iframes seem to stall over time, and I am not apt enough to troubleshoot completely.
My current symptom is this: Plots arbitrarily stall - I'm pushing data, but the iframe isn't updating.
The current solution is: Refresh the browser every X minutes.
The solution works, but it's aggravating, because I don't understand why the visual is stalling in the first place (is it me, is it them, etc.).
As I was reviewing some of the documentation, specifically this link:
https://plot.ly/streaming/
I noticed they call out NOT to continually open and close streams, and that heartbeats should be sent every so often to keep things alive/fresh.
Here's what I'm currently calling every 10 minutes:
pullData(mysql)
format data
open(plotly.stream1)
write data to plotly.stream1
close(plotly.stream1)
open(plotly.stream2)
write data to plotly.stream2
close(plotly.stream2)
Based on what I am reading, it sounds like I should actually execute the script once on startup, keep the streams open, and heartbeat() them every 15 or so seconds between actual write() calls, like this:
open(plotly.stream1)
open(plotly.stream2)
every 10 minutes:
    pullData(mysql)
    format data
    write data to plotly.stream1
    write data to plotly.stream2
while not pulling and writing:
    every 15 seconds:
        heartbeat(plotly.stream1)
        heartbeat(plotly.stream2)
if error:
    close(plotly.stream1)
    close(plotly.stream2)
Please excuse the pseudo-mess; I'm just trying to convey an idea. Does anyone have any advice? I started down my original path of opening, writing, and closing based on the streaming example, but that's a one-time write. The other example is a constant stream of data. I'm somewhere in between the two.
Furthermore: is this train of thought even related to the iframe not refreshing? Part of me believes the symptom is unrelated to my idea - the data is getting to plot.ly fine, and it's my session that's expiring, or the iframe "connection" that's going stale. If the symptom is unrelated, at least I'll have made my source code a bit cleaner and more appropriate.
Any advice is greatly appreciated!
Thanks
-justin
Plotly will close a stream that is inactive for more than 60 seconds. You must send a newline down the streaming channel (a heartbeat) to keep it open. I recommend every 30 seconds.
Your first code example may not work as expected because the client-side websocket (which connects the plot to our system) may close when your first source stream (the stream that connects your script to our system) exits. When you disconnect a source stream, a signal is sent to our system that lets it know your stream is now inactive. If a new source stream does not reconnect quickly, we close the client-connecting websockets.
Now, when your script gets more data and opens a new stream, it will successfully stream data to our system, but the client-side websocket, now closed, will not pass the data to the plot. We cache a certain number of points for you behind the scenes, so that when you refresh the page the websocket reconnects and you get the last n points (where n is set by max-points in the API call).
This is why sending the heartbeat is important. We keep the source stream open and that in turn ensures that all the connected Clients keep their websockets open.
This isn't necessarily the most robust behaviour for a streaming platform to have and we will likely make it better in the future. For now though you will likely see better results by attempting to implement the code in your second example.
Hope that helped!
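A sketch of that second pattern with the legacy plotly streaming API; the stream tokens and the pull_data_from_mysql() helper are placeholders, and the 30-second heartbeat stays inside the 60-second idle limit described above:

import time
import plotly.plotly as py

stream1 = py.Stream('your_stream_token_1')  # placeholder tokens
stream2 = py.Stream('your_stream_token_2')
stream1.open()
stream2.open()

last_write = 0
while True:
    now = time.time()
    if now - last_write >= 600:          # every 10 minutes: write real data
        d1, d2 = pull_data_from_mysql()  # hypothetical helper
        stream1.write(d1)                # e.g. dict(x=..., y=...)
        stream2.write(d2)
        last_write = now
    else:                                # otherwise: keep the streams alive
        stream1.heartbeat()
        stream2.heartbeat()
    time.sleep(30)                       # well inside the 60 s idle window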
I am using Scapy to capture Wi-Fi client probe-request frames. I am only interested in the MAC address and the requested SSID of the clients. I do something like the following.
from scapy.all import sniff, Dot11

def Handler(pkt):
    if pkt.haslayer(Dot11):
        if pkt.type == 0 and pkt.subtype == 4:  # management frame, probe request
            print pkt.addr2 + " " + pkt.info

sniff(iface="mon0", prn=Handler)
My issue is that I am doing this on an embedded device with limited processing power. When I run my script, processor utilization rises to nearly 100%. I assume this is because of the sheer volume of frames that Scapy sniffs and passes to my Python code, and that if I could apply the right filter in my sniff command, I could eliminate many of the unused frames and reduce the processor load.
Is there a filter statement that could be used to do this?
With Scapy it is possible to sniff with a BPF filter applied to the capture. This filters out packets at a much lower level than your Handler function does, and should significantly improve performance.
There is a simple example in the Scapy documentation.
# Could also add a subtype to this.
sniff(iface="mon0", prn=Handler, filter="type mgt")
Filter pieced together from here specifically.
Unfortunately I can't test this right now, but this information should provide you with a stepping stone to your ultimate solution, or let someone else post exactly what you need. I believe you will also need to set the interface to monitor mode.
You may also find this question of interest - Accessing 802.11 Wireless Management Frames from Python.
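For probe requests specifically, the subtype can be added to the filter as the comment above hints; a sketch, assuming your libpcap build understands the 802.11 qualifiers:

# Only management frames of subtype probe-request reach Python at all;
# store=0 keeps Scapy from accumulating packets in memory.
sniff(iface="mon0", prn=Handler, store=0,
      filter="type mgt subtype probe-req")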
Scapy is extremely slow due to the way it decodes the data. You may:
Use a BPF filter on the input to only get the frames you are looking for before handing them to Scapy. See this module for an example; it uses libpcap to get the data from the air or from a file and passes it through a dynamically updated BPF filter to keep unwanted traffic out.
Write your own parser for Wi-Fi in C (which is not too hard given the limited amount of information you need; there are things like prismhead, though).
Use tshark from Wireshark as a subprocess and collect data from there.
I highly recommend the third approach, although Wireshark comes with a >120 MB library that your embedded device might not be able to handle. A sketch of the tshark approach follows.
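A minimal sketch of driving tshark as a subprocess; the field names and display filter are standard Wireshark ones, but verify them against your tshark version (older releases used -R instead of -Y):

import subprocess

# wlan.fc.type_subtype == 0x04 selects probe requests; -l line-buffers
# the output so we can read results as they arrive.
cmd = ['tshark', '-i', 'mon0', '-l',
       '-Y', 'wlan.fc.type_subtype == 0x04',
       '-T', 'fields', '-e', 'wlan.sa', '-e', 'wlan.ssid']
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
for line in proc.stdout:
    mac, _, ssid = line.decode(errors='replace').strip().partition('\t')
    print(mac, ssid)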