pyamiibo vs amiiboapi.com to look up amiibo bin dump file - python

I am trying to use pyamiibo from https://pypi.org/project/pyamiibo/ to read my amiibo bin dump file in python, and attempting to use amiiboapi.com to look up the details...
For Duck Hunt, pyamiibo's uid_hex returns "04 FC 30 82 03 49 80", but amiiboapi.com returns { "head": "07820000", "tail": "002f0002",}...
What should I do to link up the 2 outputs?

The UID is unique to each physical tag; the head and tail are in pages 21 and 22 respectively.
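As a rough sketch (assuming the standard NTAG215 layout, where page N starts at byte offset N * 4; the file name is a placeholder), you can pull those two pages straight out of the .bin dump and use them as the head/tail values for amiiboapi.com:
with open("duck_hunt.bin", "rb") as f:   # placeholder path to the amiibo dump
    dump = f.read()

head = dump[21 * 4:22 * 4].hex()         # page 21 -> "07820000" for Duck Hunt
tail = dump[22 * 4:23 * 4].hex()         # page 22 -> "002f0002"

# amiiboapi.com can then be filtered by these values, e.g.
# https://www.amiiboapi.com/api/amiibo/?head=07820000&tail=002f0002
print(head, tail)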

Related

convert string to hexadecimal in R

I am trying to convert a string such as
str <- 'KgHDRZ3N'
to hexadecimal. The function as.hexmode() returns an error
Error in as.hexmode("KgHDRZ3N") :
'x' cannot be coerced to class "hexmode"
Are there any other functions I can use to achieve this?
I am trying to replicate what .encode() does in Python.
You can use charToRaw
charToRaw(str)
#> [1] 4b 67 48 44 52 5a 33 4e
Or if you want a character vector
as.hexmode(as.numeric(charToRaw(str)))
#> [1] "4b" "67" "48" "44" "52" "5a" "33" "4e"
or if you want the "0x" prefix:
paste0("0x", as.hexmode(as.numeric(charToRaw(str))))
#> [1] "0x4b" "0x67" "0x48" "0x44" "0x52" "0x5a" "0x33" "0x4e"

How to use Python to extract packet information from hexdump?

I am working on a network project where I have a hexdump of packets, and I need to dissect that hexdump to display all the packet information.
Example : Packet Information Fields -
source address
destination address
encapsulation type
frame number
frame length
FCS
data
epoch time
We are not supposed to use files for input and output because that would increase memory utilization and might crash the server.
I tried the code below, but it doesn't work in my case:
# imports scapy Utility
from scapy.all import *
from scapy.utils import *
# variable to store hexdump
hexdump = '0000 49 47 87 95 4a 30 9e 9c f7 09 70 7f....'
# Initialize a 802.11 structure from raw bytes
packet = Dot11(bytearray.fromhex(hexdump))
#scapy function to view the info
packet.summary()
packet.show()
Are there any suggestions on how to achieve this? I am new to this technology and might lack the proper direction to solve it.
Your hexdump value seems to include the index. Remove it:
On Scapy >= 2.4rc:
data = hex_bytes('49 47 87 95 4a 30 9e 9c f7 09 70 7f [...]'.replace(' ', ''))
Or with Python 2 and Scapy <= 2.3.3:
data = '49 47 87 95 4a 30 9e 9c f7 09 70 7f [...]'.replace(' ', '').decode('hex')
Then:
Dot11(data)
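Putting the pieces together, a minimal Python 3 sketch (the hexdump string is a truncated placeholder; the standard-library bytes.fromhex does the same job as hex_bytes above):
from scapy.all import Dot11

# hexdump copied from a tool that prefixes every line with an offset column
hexdump_text = """0000 49 47 87 95 4a 30 9e 9c f7 09 70 7f
0010 00 00 00 00 00 00 00 00 00 00 00 00"""

# drop the offset column ("0000", "0010", ...) and all remaining whitespace
clean = "".join("".join(line.split()[1:]) for line in hexdump_text.splitlines())

packet = Dot11(bytes.fromhex(clean))
packet.summary()
packet.show()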

Get the sequenced reads within 300 bases of restriction sites

I have many SAM files containing sequenced reads, and I want to get the reads that are within 300 bases of a restriction site in either direction.
The first file contains the positions of the restriction sites and has two columns:
chr01 4957
chr01 6605
chr02 19968
chr02 21055
chr02 208555
chr03 243398
The second file has the reads in SAM format (almost 2.6M lines):
id1995 147 chr03 119509969 42 85M
id1999 83 chr10 131560619 26 81M
id1999 163 chr10 131560429 26 85M
id2099 73 chr10 60627850 42 81M
Now I want to compare column 3 of the SAM file with column 1 of the position file, and column 4 of the SAM file with column 2 of the position file.
I tried doing this in R, but because the data is large it takes a lot of time.
Can you improve the R script to run faster by implementing a better algorithm?
R code:
pos = read.csv(file="sites.csv", header=F, sep="\t")
fastq = read.csv(file="reads.sam", header=F, sep="\t")
newFastq = data.frame(fastq)
newFastq = NULL
trim <- function (x) gsub("^\\s+|\\s+$", "", x)
for (i in 1:nrow(fastq)) {
  for (j in 1:nrow(pos)) {
    if (as.character(fastq[i,3]) == trim(as.character(pos[j,1]))) {
      if (fastq[i,4] - pos[j,2] < 300 && fastq[i,4] - pos[j,2] > -300) {
        newFastq = rbind(newFastq, fastq[i,])
      }
    }
  }
}
# Write data into file
write.table(newFastq, file="sitesFound.csv", row.names=FALSE, na="", quote=FALSE, col.names=FALSE, sep="\t")
Or can you improve this code by rewriting it in Perl?
One overall strategy is to make indexed bam files using Bioconductor Rsamtools asBam() and indexBam(). Read your first file into a data.frame and construct a GenomicRanges GRanges() object. Finally, use GenomicAlignments readGAlignments() to read the bam file, using the GRanges() as the which= argument to ScanBamParam(). The Bioconductor support site https://support.bioconductor.org is more appropriate for Bioconductor questions, if you decide to go this route.
It looks like you want reads that are within +/- 300 base pairs of your GRanges object. Resize the GRanges
library(GenomicRanges)
## create gr = GRanges(...)
gr = resize(gr, width = 600, fix="center")
use this as the which= in ScanBamParam(), and read your BAM file
library(GenomicAlignments)
param = ScanBamParam(which = gr)
reads = readGAlignments("your.bam", param = param)
Use what= to control fields read from the BAM file, e.g.
param = ScanBamParam(which = gr, what = "seq")
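If you would rather do the same thing from Python, here is a rough sketch of the equivalent approach with pysam (coordinate-sort and index the SAM into a BAM once, then fetch only the reads overlapping each +/- 300 bp window); the file names are placeholders and this is an outline of the idea rather than tested code:
import pysam

# one-off conversion: coordinate-sort the SAM into an indexed BAM
pysam.sort("-o", "reads.sorted.bam", "reads.sam")
pysam.index("reads.sorted.bam")

bam = pysam.AlignmentFile("reads.sorted.bam", "rb")
out = pysam.AlignmentFile("sitesFound.bam", "wb", template=bam)

written = set()
with open("sites.csv") as sites:             # tab-separated: chrom <tab> position
    for line in sites:
        chrom, pos = line.split()
        pos = int(pos)
        if chrom not in bam.references:      # skip sites on contigs absent from the BAM
            continue
        # the BAM index makes this touch only reads overlapping the window
        for read in bam.fetch(chrom, max(0, pos - 300), pos + 300):
            key = (read.query_name, read.flag, read.reference_start)
            if key not in written:           # avoid duplicates when windows overlap
                written.add(key)
                out.write(read)

out.close()
bam.close()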

How to send NRPN messages with pygame midi

I'm writing a program that will read incoming CC messages from a device that can only send CCs and forward them to another device as NRPN messages. I know how to send CC messages from pygame, but I can't wrap my mind around how to send NRPNs. I looked at the Vriareon code and I don't see anywhere in there where it even accesses MIDI. Can anyone give an example of how this is done?
Thank you!
NRPN messages are CC messages.
However, NRPN numbers are distinct from CC numbers. The MIDI specification says:
Controller number 6 (Data Entry), in conjunction with Controller numbers 96 (Data Increment), 97 (Data Decrement), 98 (Non-Registered Parameter Number LSB), 99 (Non-Registered Parameter Number MSB), 100 (Registered Parameter Number LSB), and 101 (Registered Parameter Number MSB), extend the number of controllers available via MIDI. Parameter data is transferred by first selecting the parameter number to be edited using controllers 98 and 99 or 100 and 101, and then adjusting the data value for that parameter using controller number 6, 96, or 97.
To change a controller like volume (7), you would send a single message:
B0 07 xx
To change an NRPN, you would select the NRPN first:
B0 63 mm
B0 62 ll
And then change the currently selected NRPN with the data entry controller:
B0 06 mm
B0 26 ll (optional, for 14-bit values)
So setting NRPN 0:1 to value 42 could be done with:
self.midi_out.write_short(0xb0, 0x63, 0)
self.midi_out.write_short(0xb0, 0x62, 1)
self.midi_out.write_short(0xb0, 0x06, 42)
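A self-contained sketch of the same thing, wrapped in a small helper; it assumes pygame.midi's default output device is the one you want (otherwise pick the right id from pygame.midi.get_device_info()):
import pygame.midi

def send_nrpn(out, channel, nrpn_msb, nrpn_lsb, value_msb, value_lsb=None):
    # Select the NRPN (CC 99 = MSB, CC 98 = LSB), then send its value
    # with Data Entry (CC 6 = MSB, CC 38 = LSB for 14-bit values).
    status = 0xB0 | (channel & 0x0F)
    out.write_short(status, 0x63, nrpn_msb)
    out.write_short(status, 0x62, nrpn_lsb)
    out.write_short(status, 0x06, value_msb)
    if value_lsb is not None:
        out.write_short(status, 0x26, value_lsb)

pygame.midi.init()
midi_out = pygame.midi.Output(pygame.midi.get_default_output_id())
send_nrpn(midi_out, channel=0, nrpn_msb=0, nrpn_lsb=1, value_msb=42)  # NRPN 0:1 = 42
midi_out.close()
pygame.midi.quit()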

MapReduce output not the complete set expected?

I'm running a streaming Hadoop job on a single Hadoop pseudo-distributed node in Python, also using hadoop-lzo to produce splits on a .lzo compressed input file.
Everything works as expected when using small compressed or uncompressed test datasets: the MapReduce output matches that from a simple 'cat | map | sort | reduce' pipeline in Unix, whether the input is compressed or not.
However, once I move to processing the single large .lzo (pre-indexed) dataset (~40GB compressed) and the job is split across multiple mappers, the output looks to be truncated - only the first few key values are present.
The code + outputs follow - as you can see, it's a very simple count for testing the whole process.
Output from a straightforward Unix pipeline on test data (a subset of the large dataset):
lzop -cd objectdata_input.lzo | ./objectdata_map.py | sort | ./objectdata_red.py
3656 3
3671 3
51 6
Output from the Hadoop job on the same test data:
hadoop jar $HADOOP_INSTALL/contrib/streaming/hadoop-streaming-*.jar -input objectdata_input.lzo -inputformat com.hadoop.mapred.DeprecatedLzoTextInputFormat -output retention_counts -mapper objectdata_map.py -reducer objectdata_red.py -file /home/bob/python-dev/objectdata_map.py -file /home/bob/python-dev/objectdata_red.py
3656 3
3671 3
51 6
Now, the test data is a small subset of lines from the real dataset, so I would at least expect to see the keys from above in the resulting output when the job is run against the full dataset. However, what I get is:
hadoop jar $HADOOP_INSTALL/contrib/streaming/hadoop-streaming-*.jar -input objectdata_input_full.lzo -inputformat com.hadoop.mapred.DeprecatedLzoTextInputFormat -output retention_counts -mapper objectdata_map.py -reducer objectdata_red.py -file /home/bob/python-dev/objectdata_map.py -file /home/bob/python-dev/objectdata_red.py
1 40475582
12 48874
14 8929777
15 219984
16 161340
17 793211
18 78862
19 47561
2 14279960
20 56399
21 3360
22 944639
23 1384073
24 956886
25 9667
26 51542
27 2796
28 336767
29 840
3 3874316
30 1776
33 1448
34 12144
35 1872
36 1919
37 2035
38 291
39 422
4 539750
40 1820
41 1627
42 97678
43 67581
44 11009
45 938
46 849
47 375
48 876
49 671
5 262848
50 5674
51 90
6 6459687
7 4711612
8 20505097
9 135592
...There are far fewer keys than I would expect based on the dataset.
I'm less bothered by the keys themselves - this set could be expected given the input dataset - but I am concerned that there should be many more keys, in the thousands. When I run the code in a Unix pipeline against the first 25 million records of the dataset, I get keys in the range of approximately 1 to 7000.
So this output appears to be just the first few lines of what I would actually expect, and I'm not sure why. Am I missing collating many part-0000# files, or something similar? This is just a single-node pseudo-distributed Hadoop instance I am testing at home, so if there are more part-# files to collect I have no idea where they could be; they do not show up in the retention_counts dir in HDFS.
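(For reference, checking for and collating any extra part files from HDFS would look something like this, using the standard hadoop fs commands; the local output name is a placeholder:)
hadoop fs -ls retention_counts
hadoop fs -getmerge retention_counts ./retention_counts_merged.txt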
The mapper and reducer code follows - effectively the same as the many word-count examples floating about:
objectdata_map.py
#!/usr/bin/env python
import sys

RETENTION_DAYS = (8321, 8335)

for line in sys.stdin:
    line = line.strip()
    try:
        retention_days = int(line[RETENTION_DAYS[0]:RETENTION_DAYS[1]])
        print "%s\t%s" % (retention_days, 1)
    except:
        continue
objectdata_red.py
#!/usr/bin/env python
import sys

last_key = None
key_count = 0

for line in sys.stdin:
    key = line.split('\t')[0]
    if last_key and last_key != key:
        print "%s\t%s" % (last_key, key_count)
        key_count = 1
    else:
        key_count += 1
    last_key = key

print "%s\t%s" % (last_key, key_count)
This is all on a manually installed Hadoop 1.1.2 in pseudo-distributed mode, with hadoop-lzo built and installed from
https://github.com/kevinweil/hadoop-lzo
