How do I set model parameters such as the photon index and metal abundance in the CIAO/Sherpa tool for an X-ray source?
I am trying to analyze a bright X-ray source using data extracted from the Chandra archive. I have no idea how to set up the parameters for some of the models in the Sherpa tool.
There are several analysis examples describing how to do that on the CIAO/Sherpa website. Check the list of fitting threads:
https://cxc.harvard.edu/sherpa/threads/fitting.html
Start with the first one, on fitting PHA spectra:
https://cxc.harvard.edu/sherpa/threads/pha_intro/
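To make that concrete, here is a minimal sketch of the kind of session those threads walk you through. The file name, the choice of models, and the starting values are assumptions for illustration only; adapt them to your source.

```python
# Minimal Sherpa sketch (run inside a CIAO Sherpa session);
# the import is only needed if you run this as a plain Python script.
from sherpa.astro.ui import *

load_pha("source.pi")        # spectrum extracted from the Chandra archive (assumed name)
notice(0.5, 7.0)             # restrict to a sensible energy band (keV)
subtract()                   # subtract the background, if one is associated

# Absorbed power law plus thermal plasma: the photon index lives in the
# power-law component, the metal abundance in the APEC component.
set_source(xsphabs.abs1 * (xspowerlaw.pl + xsapec.therm))

abs1.nH = 0.05               # 10^22 cm^-2, starting guess
pl.PhoIndex = 1.8            # photon index, starting guess
therm.kT = 1.0               # plasma temperature in keV
therm.Abundanc = 0.3         # metal abundance relative to solar

fit()
plot_fit()
```

Whether you need the power law, the thermal component, or both depends on your source; the threads above show how to compare and constrain the different choices.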
I am trying my best not to reinvent the wheel while automating data collection for murine gastric emptying. Currently, members of our very small lab have to manually count net pellets per cage. I am hoping to find code that could be easily adapted to the following scenario:
An aerial-view camera watches a cage with a maximum of 3 mice. Bedding would be minimal and its color would be in stark contrast to the pellets. Each mouse would be recognized as an individual. Each new pellet that appears would be labeled "object#" so that pellets are not counted twice. If mice are close together when a new pellet is produced, the program would guess, or use probability, to assign the pellet to a mouse in proximity.
There are models for object detection, for example YOLO. The problem is that you would have to train your own model, because there is no generalized, pretrained model that will detect your pellets out of the box. The training process itself is simple, since it is built into the YOLO framework. The biggest effort you will need to invest is in the training data, which you would need to label manually and convert into the format of your training framework (see the sketch below for the conversion step). There are tools to simplify the labelling process, but it is still an enormous amount of effort to gather good-quality training data, since you need to label each frame individually.
I used this website to label my data, and the framework I used to train my models is called Darknet.
(This is not an advertisement.)
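To illustrate the conversion step, here is a minimal sketch that turns pixel-coordinate bounding boxes into the normalised `class x_center y_center width height` text files Darknet/YOLO expects. The CSV column layout and the camera resolution below are assumptions; your labelling tool may already export something close to this.

```python
import csv
from pathlib import Path

IMG_W, IMG_H = 1280, 720   # assumed camera resolution

def to_yolo_line(cls_id, x_min, y_min, x_max, y_max):
    """Convert a pixel-space box into YOLO's normalised format."""
    x_c = (x_min + x_max) / 2.0 / IMG_W
    y_c = (y_min + y_max) / 2.0 / IMG_H
    w = (x_max - x_min) / IMG_W
    h = (y_max - y_min) / IMG_H
    return f"{cls_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# Assumed CSV layout: frame,class_id,x_min,y_min,x_max,y_max
Path("labels").mkdir(exist_ok=True)
with open("annotations.csv") as f:
    for row in csv.DictReader(f):
        out = Path("labels") / (Path(row["frame"]).stem + ".txt")
        with out.open("a") as label_file:
            label_file.write(to_yolo_line(int(row["class_id"]),
                                          float(row["x_min"]), float(row["y_min"]),
                                          float(row["x_max"]), float(row["y_max"])) + "\n")
```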
I'm currently working on a tkinter Python school project whose sole purpose is to generate images from audio files. I'm going to pick audio properties and use them as values to generate unique abstract images. However, I don't know which properties I can analyze to extract those values from, so I was looking for some guidance on which properties (audio frequency, amplitude, etc.) I can extract values from to generate the images with Python.
The question is very broad in its current form.
(Bear in mind audio is not my area of expertise, so do keep an eye out for the opinions of people working in audio/audiovisual/generative fields.)
You can go about it either way: figure out what kind of image(s) you'd like to create from audio, then work out which audio features to use; or the other way around: pick an audio feature you'd like to explore, then think of how you'd best or most interestingly represent it visually.
There's a distinction between a single image and multiple images.
For a single image, the simplest thing I can think of is drawing a grid of squares where a visual property of each square (e.g. its size, fill colour intensity, etc.) is mapped to the amplitude at that point in time. The single image would visualise a whole track's amplitude pattern. Even with such a simple example there are many choices you can make: how often you sample, how you lay out the grid (cartesian, polar), and how each amplitude sample is visualised (it could use different shapes, sizes, colours, etc.).
(Similar concept to CinemaRedux, simpler for audio only)
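As a concrete starting point, here is a minimal sketch of that grid idea, assuming a mono file at "track.wav" and using librosa for the RMS envelope and Pillow for drawing (both library choices are assumptions about your toolchain, not requirements):

```python
import librosa
import numpy as np
from PIL import Image, ImageDraw

y, sr = librosa.load("track.wav", sr=None, mono=True)   # assumed input file
rms = librosa.feature.rms(y=y, hop_length=2048)[0]      # one amplitude value per frame
rms = rms / max(rms.max(), 1e-9)                        # normalise to 0..1

cols, cell = 32, 20                                     # grid layout choices
rows = int(np.ceil(len(rms) / cols))
img = Image.new("RGB", (cols * cell, rows * cell), "black")
draw = ImageDraw.Draw(img)

for i, a in enumerate(rms):
    r, c = divmod(i, cols)
    size = int(a * cell)                                # square size maps to amplitude
    x0 = c * cell + (cell - size) // 2
    y0 = r * cell + (cell - size) // 2
    draw.rectangle([x0, y0, x0 + size, y0 + size],
                   fill=(int(a * 255), 80, 200))        # arbitrary colour mapping

img.save("amplitude_grid.png")
```

Every constant above (grid width, cell size, colour) is a design decision you would tweak; the point is only to show how little is needed to get from amplitude to an image.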
You can look into the field of data visualisation for inspiration.
Information is Beautiful is a great place to start.
If you want to generate multiple images, that seems to head into audiovisual territory (e.g. abstract animation, audio-reactive motion graphics, etc.).
Your question originally had the Processing tag, which I removed; however, you could be using Processing's Python Mode.
In terms of audio visualisation, one good example I can think of is Robert Hodgin's work; see Magnetosphere and the audio-generated landscape prototype. He is using frequency analysis (FFT) with a bit of smoothing/data massaging to amplify the elements useful for visualisation and dampen some of the noise.
(There are a few handy audio libraries such as Minim and Beads; however, I assume you're interested in using raw Python, not Jython, which is what the official Processing Python Mode uses.) Here is an answer on FFT analysis for visualisation (even though it's in Processing Java, the principles can be applied in Python).
Personally I've only used pyaudio so far, for basic audio tasks. I would assume you could use it for amplitude analysis, but for other, more complex tasks you might need something extra.
Doing a quick search, librosa pops up.
If what you want to achieve isn't clear yet, try prototyping first and start with the simplest audio analysis and visual elements you can think of (e.g. amplitude mapped to boxes over time). Constraints can be great for creativity, and the minimal approach could translate into cleaner, more minimal visuals.
You can then look into FFT, MFCCs, onset/beat detection, etc.
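For live input, here is a minimal pyaudio sketch of the kind of amplitude analysis mentioned above; the sample rate, buffer size, and loop length are arbitrary choices:

```python
import numpy as np
import pyaudio

RATE, CHUNK = 44100, 1024

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                input=True, frames_per_buffer=CHUNK)

try:
    for _ in range(100):                                   # roughly 2.3 s of audio
        samples = np.frombuffer(stream.read(CHUNK), dtype=np.int16)
        rms = np.sqrt(np.mean(samples.astype(np.float64) ** 2))
        print(f"RMS amplitude: {rms:.1f}")                 # map this to a visual property
finally:
    stream.stop_stream()
    stream.close()
    p.terminate()
```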
Another tool that could be useful for prototyping is Sonic Visualiser.
You can open a track and use some of the built-in feature extractors.
(You can even get away with exporting XML or CSV data from Sonic Visualiser, which you can load/parse in Python and use to render image(s).)
It uses a plugin system (similar to VST plugins in DAWs like Ableton Live, Apple Logic, etc.) called Vamp plugins. You can then use the VampPy Python wrapper if you need the data at runtime.
(You might also want to draw inspiration from other environments used for audiovisual artworks, like Pure Data + GEM, MaxMSP + Jitter, VVVV, etc.)
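If you go the export route instead, here is a minimal sketch of loading an exported CSV back into Python; the exact columns depend on which feature extractor you export from, so "time,value" pairs are an assumption:

```python
import csv

times, values = [], []
with open("exported_feature.csv") as f:        # assumed export from Sonic Visualiser
    for row in csv.reader(f):
        if len(row) < 2:
            continue
        times.append(float(row[0]))            # timestamp in seconds
        values.append(float(row[1]))           # feature value at that time

# "values" can now drive whatever visual mapping you choose
print(f"{len(values)} samples spanning {times[-1] - times[0]:.1f} s")
```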
Time domain: zero-crossing rate, root mean square (RMS) energy, etc. Frequency domain: spectral bandwidth, flux, rolloff, flatness, MFCCs, etc. Also tempo. You can use librosa for Python (https://librosa.org/doc/latest/index.html) to extract these from a .wav file; it implements the Fast Fourier Transform and framing. You can then apply statistics such as the mean and standard deviation to the vector of these characteristics across the whole audio file.
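A minimal sketch of that workflow with librosa, assuming a file at "track.wav" (spectral flux is omitted because it isn't a one-liner in librosa; the rest map directly to library calls):

```python
import librosa
import numpy as np

y, sr = librosa.load("track.wav", sr=None)    # assumed input file

per_frame = {
    "zero_crossing_rate": librosa.feature.zero_crossing_rate(y)[0],
    "rms_energy":         librosa.feature.rms(y=y)[0],
    "spectral_bandwidth": librosa.feature.spectral_bandwidth(y=y, sr=sr)[0],
    "spectral_rolloff":   librosa.feature.spectral_rolloff(y=y, sr=sr)[0],
    "spectral_flatness":  librosa.feature.spectral_flatness(y=y)[0],
}
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)     # 13 coefficients per frame
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)

# Summarise each per-frame feature across the whole file with mean/std
summary = {name: (float(np.mean(v)), float(np.std(v))) for name, v in per_frame.items()}
summary["mfcc_mean"] = mfcc.mean(axis=1).tolist()
summary["mfcc_std"] = mfcc.std(axis=1).tolist()
summary["tempo_bpm"] = tempo
print(summary)
```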
Providing an additional avenue for exploration: you have some tools to explore this qualitatively (as opposed to quantitatively, using metrics derived from the audio signal as suggested in the great answers above).
As you mention, the objective is to generate unique abstract images from sound, so I would suggest an interesting angle may be to apply some machine learning techniques and derive mood classification predictions from the source audio.
For instance, you could use the TensorFlow models in Essentia to predict the mood of the track and associate the images you select with the mood scores generated. I would suggest going well beyond this and using the tkinter image creation tools to create your own mappings to mood. Use pen and paper to develop your mapping strategy: are certain moods more angular or circular? Which colour mappings will you select, and why? You have a great deal of freedom to create these mappings, so start simple, as complexity builds naturally.
Using some simple mood predictions may be more useful for you as someone who has more qualitative experience with sound than quantitative experience as an audio engineer. It may be worth making this central to the report you write, documenting your mapping decisions and design process, if that is a requirement of the task.
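To show what such a mapping could look like once you have scores in hand, here is a minimal tkinter sketch. The mood dictionary and the colour/shape choices are purely illustrative; the Essentia prediction step is not shown and would feed the scores in.

```python
import tkinter as tk

# Hypothetical mood scores in the 0..1 range (would come from your classifier)
mood = {"happy": 0.7, "aggressive": 0.2, "relaxed": 0.5}

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=400, bg="black")
canvas.pack()

# Example mapping decisions: "relaxed" sets size, "happy" vs "aggressive"
# picks round vs angular shapes, and both contribute to the colour.
size = int(50 + 150 * mood["relaxed"])
colour = "#%02x%02x%02x" % (int(255 * mood["aggressive"]),
                            int(255 * mood["happy"]),
                            128)
if mood["happy"] > mood["aggressive"]:
    canvas.create_oval(200 - size, 200 - size, 200 + size, 200 + size,
                       fill=colour, outline="")
else:
    canvas.create_polygon(200, 200 - size, 200 + size, 200 + size,
                          200 - size, 200 + size, fill=colour, outline="")

root.mainloop()
```

The point is not these particular rules but that each mapping is a documented design decision you can justify in your report.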
I'm trying to convert a single image into its depth map, but I can't find any useful tutorial or documentation.
I'd like to use OpenCV, but if you know a way to get the depth map using, for example, TensorFlow, I'd be glad to hear it.
There are numerous tutorials for stereo vision but I want to make it cheaper because it's for a project to help blind people.
I'm currently using an ESP32-CAM to stream frame by frame, and I'm receiving the images in Python using OpenCV.
Usually, we need photometric measurements from different positions in the world to form a geometric understanding of it (a.k.a. a depth map). From a single image it is not possible to measure the geometry directly, but it is possible to infer depth from prior knowledge.
One way to make a single image work is to use a deep learning-based method to infer depth directly. The deep learning-based approaches are usually all based on Python, so if you are only familiar with Python, this is the approach you should go for. If the image is small enough, I think real-time performance is possible. There are many works of this kind using Caffe, TensorFlow, Torch, etc.; you can search GitHub for more options. The one I reference here is what I used recently.
reference:
Godard, Clément, et al. "Digging into self-supervised monocular depth estimation." Proceedings of the IEEE international conference on computer vision. 2019.
Source code: https://github.com/nianticlabs/monodepth2
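To tie this to your ESP32-CAM setup, here is a minimal sketch of the inference loop. The stream URL is hypothetical, and predict_depth() is a placeholder you would replace with a call into whichever pretrained monocular network you pick (e.g. monodepth2 above):

```python
import cv2
import numpy as np

STREAM_URL = "http://192.168.1.50:81/stream"   # hypothetical ESP32-CAM MJPEG endpoint

def predict_depth(frame_rgb):
    """Placeholder: run your pretrained monocular depth network here
    (e.g. monodepth2) and return a float32 depth/disparity map."""
    raise NotImplementedError

cap = cv2.VideoCapture(STREAM_URL)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    depth = predict_depth(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    # Normalise the depth map for display only
    vis = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    cv2.imshow("depth", cv2.applyColorMap(vis, cv2.COLORMAP_JET))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```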
The other way is to use a large-FOV video for single-camera-based SLAM. This approach has various constraints, such as needing good features, a large FOV, slow motion, etc. You can find many works of this kind, such as DTAM, LSD-SLAM, DSO, etc. There are a couple of other packages from HKUST or ETH that do the mapping given the position (e.g. if you have GPS/compass); some of the well-known names are REMODE+SVO, open_quadtree_mapping, etc.
One typical example of single-camera-based SLAM would be LSD-SLAM. It is a real-time SLAM system.
It is implemented on top of ROS in C++, and I remember it does publish the depth image. You can write a Python node to subscribe to the depth directly, or to the globally optimized point cloud, and project it into a depth map from any view angle.
reference: Engel, Jakob, Thomas Schöps, and Daniel Cremers. "LSD-SLAM: Large-scale direct monocular SLAM." European conference on computer vision. Springer, Cham, 2014.
source code: https://github.com/tum-vision/lsd_slam
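Here is a minimal rospy sketch of the kind of subscriber node described above. The topic name is a guess (check what your LSD-SLAM launch actually publishes with `rostopic list`); cv_bridge handles the conversion from the ROS image message to a NumPy array:

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def on_depth(msg):
    # Convert the ROS image message into a NumPy depth map
    depth = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
    rospy.loginfo("depth frame %dx%d", depth.shape[1], depth.shape[0])

rospy.init_node("depth_listener")
# Topic name is an assumption; adjust it to whatever LSD-SLAM publishes
rospy.Subscriber("/lsd_slam/depth", Image, on_depth)
rospy.spin()
```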
I recently started to use tsfresh library to extract features from time-series data.
It's very cool that I can get a whole bag of features in a few lines of code, but I have doubts about the logic behind the select_features method. I looked into the official documentation and googled it, but I couldn't find which algorithm is used. I want to know how it works, so that I can decide what to do in the feature selection phase after data processing in tsfresh.
According to that page in their documentation, what they do is:
they extract a whole set of features
they individually test the different features for significance (in a supervised setting, so the test is something like "is this feature useful for predicting that output?") and keep the most significant ones using a procedure called the Benjamini-Yekutieli procedure
The references they provide should be of interest:
[1] Christ, M., Kempa-Liehr, A.W. and Feindt, M. (2016). Distributed and parallel time series feature extraction for industrial big data applications. ArXiv e-prints: 1610.07717 URL: http://adsabs.harvard.edu/abs/2016arXiv161007717C
[2] Benjamini, Y. and Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. Annals of statistics, 1165–1188
where [1] is the paper describing tsfresh and [2] is the reference for the multiple testing procedure (called Benjamini-Yekutieli procedure above).
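For context, here is a minimal sketch of where that procedure shows up in the tsfresh API. The toy long-format DataFrame and the fdr_level value are purely illustrative:

```python
import pandas as pd
from tsfresh import extract_features, select_features
from tsfresh.utilities.dataframe_functions import impute

# Toy long-format input: one row per (series id, time step) observation
df = pd.DataFrame({
    "id":    [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "time":  [0, 1, 2] * 4,
    "value": [1.0, 2.0, 3.0, 3.0, 2.0, 1.0, 1.0, 1.5, 2.5, 2.0, 2.0, 2.0],
})
y = pd.Series([0, 1, 0, 1], index=[1, 2, 3, 4])   # one target label per series id

X = extract_features(df, column_id="id", column_sort="time")
impute(X)                                          # select_features needs no NaN/inf values

# Each feature is tested individually against y; the Benjamini-Yekutieli
# procedure then controls the false discovery rate at fdr_level.
X_selected = select_features(X, y, fdr_level=0.05)
print(X_selected.columns)
```

With a real dataset you would keep X_selected (or use extract_relevant_features, which combines the two steps) and tune fdr_level to trade off how aggressively irrelevant features are dropped.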
I have been working on a project to extract building parameters, and I used LiDAR to do so. A few days ago I was browsing Google Maps and saw that every building's 3D model is correctly showcased in the app. I wonder which technology Google is using, and how I could use it to extract a building's polygon and its parameters. I would love it if someone could tell me about an API I can use to extract that data.