Vector values in ICRF to calculate angular separation - Python

I'm trying to get my head around the concept of vector values in the ICRF.
I want to calculate the angular separation between two or more objects with known R.A. and Dec.
Referring to this question, I need to provide the x, y, z vector values for the objects in order to use the separation_from method in Skyfield.
Unfortunately, I'm not really sure how to get the x, y, z vector values for each of the objects.
I've been using pyephem's ephem.separation with no issues, but I still couldn't get it done in Skyfield.
Thanks
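One minimal sketch of a possible approach, assuming Skyfield's Star and observe() workflow fits this use case (the target coordinates and ephemeris file below are made up for illustration): wrapping each R.A./Dec. pair in a Star and observing it from Earth gives positions that already carry the x, y, z vectors, and those positions support separation_from.

from skyfield.api import Star, load

ts = load.timescale()
t = ts.now()
eph = load('de421.bsp')                  # JPL ephemeris, downloaded on first use
earth = eph['earth']

# Hypothetical targets with known R.A./Dec.
star1 = Star(ra_hours=(5, 35, 17.3), dec_degrees=(-5, 23, 28.0))
star2 = Star(ra_hours=(5, 36, 12.8), dec_degrees=(-1, 12, 6.9))

# Observing each Star from Earth yields a position with the x, y, z vector built in
p1 = earth.at(t).observe(star1)
p2 = earth.at(t).observe(star2)

print(p1.separation_from(p2).degrees)    # angular separation in degrees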

Related

Python data structure to implement photon simulation

The Problem
I have a function of two independent variables, from which I need to construct a 2-D grid with parameters of my choice. These parameters will be the ranges of the two independent variables, the smallest/largest cell size, etc.
Visually this can be thought of as a space-partitioning data structure whose geometry is provided by my function. To each cell I will then assign some properties (which will also be determined by the function and other things).
My Idea
Once such a data structure is prepared, I can simulate a photon (in a Monte Carlo based approach) which travels to a cell randomly (with some constraints given by the function and the cell properties) and is absorbed or scattered (re-emitted) from that cell with some probabilities (at which point I will be solving the radiative transfer equation in that cell). After this, the photon, if re-emitted (with its wavelength now different), moves to a neighbouring cell and keeps moving until it escapes the grid (whose boundaries were decided by the parameters) or is completely absorbed in one of the cells. So my data structure should also allow the nearest neighbouring cells to be accessed efficiently.
What I have done
I looked into scipy.spatial.kdtree, but I am not sure I will be able to assign properties to cells the way I want (though if someone can explain that, it would be really helpful, as it is very good at accessing nearest neighbours). I have also looked at other tree-based algorithms, but I am a bit lost on how to implement them. NumPy arrays are, at the end of the day, matrices, which do not suit my function (they lead to wasted memory).
So any suggestions on data structures I can use (and some nudge towards how to implement them) would be really appreciated.
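One hedged sketch of a possible approach, assuming each cell can be represented by its centre coordinates: keep the geometry in a scipy.spatial.cKDTree for fast nearest-neighbour queries, and store the per-cell properties in arrays parallel to the array of centres (all names and values below are made up for illustration).

import numpy as np
from scipy.spatial import cKDTree

# Hypothetical cell centres produced by the grid-generating function
centres = np.random.rand(1000, 2)        # (n_cells, 2) for a 2-D grid
opacity = np.random.rand(1000)           # one property per cell, parallel to centres
temperature = np.full(1000, 10.0)        # another per-cell property

tree = cKDTree(centres)

# Nearest cell to the photon's current position
photon_pos = np.array([0.3, 0.7])
dist, idx = tree.query(photon_pos)
print(opacity[idx], temperature[idx])

# k nearest neighbouring cells (e.g. to choose where the photon moves next)
dists, idxs = tree.query(photon_pos, k=5)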

How do I apply a mean filter on a data set in Python?

I have a data set of 15,497 sets of values. The first graph shows the raw data (pendulum angle vs. sample number), which, obviously, looks awful. It should look like the second picture (the filtered data). Part of the assignment is introducing a mean filter to "smoothen" the data, making it look like the data on the second graph. The data is put into np.arrays in Python, but using np.arrays I can't seem to figure out how to introduce a mean filter.
I'm interested in applying a mean filter to theta in the Python code shown in the screenshot, as theta holds the values on the y-axis of the plots. The code is included so you can easily see how the data file is read in.
There is a whole world of filtering techniques; there is not a single unique 'mean filter'. Moreover, there are causal and non-causal filters (i.e. the difference between not using future values in the filter and using them). I'm going to assume you want a mean filter of size N, as that is pretty standard. Then, to apply this filter, convolve your 'theta' vector with a mean kernel.
I suggest printing the mean kernel and studying how it looks for different N; then you may understand how it averages the values in the signal. I also urge you to think about why convolution applies this filter to theta. I'll help you by telling you to think about the equivalent multiplication in the frequency domain. Also, investigate the different modes of the convolution function, as one of them may be more tailored to the specific solution you desire.
import numpy as np

N = 2                                    # size of the averaging window
mean_kernel = np.ones(N) / N             # each of the N samples gets equal weight 1/N
filtered_sig = np.convolve(theta, mean_kernel, mode='same')   # same length as theta

Finding local minima in a 1-D array

Can you suggest a numpy/scipy function that can find the local maxima/minima of data loaded from a text file? I was trying to use the nearest-neighbours approach, but fluctuations in the data cause false identifications. Is it possible to use the neighbours approach but with 20 data points as the sample length?
scipy.signal.argrelmax looks for relative maxima in an array (there is also argrelmin for minima). It has the order keyword argument, which allows you to compare against e.g. 20 neighbours.
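A minimal sketch, assuming the data sit in a single-column text file (the file name is hypothetical):

import numpy as np
from scipy.signal import argrelmin, argrelmax

data = np.loadtxt('data.txt')            # hypothetical file name

# Indices of local minima/maxima, each compared against 20 neighbours on either side
minima_idx = argrelmin(data, order=20)[0]
maxima_idx = argrelmax(data, order=20)[0]

print(data[minima_idx])                  # the local minimum values themselves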

scipy.optimize.minimize only iterates over some variables

I have written Python (2.7.3) code in which I aim to create a weighted sum of 16 data sets and compare the result to some expected value. My problem is to find the weighting coefficients that will produce the best fit to the model. To do this, I have been experimenting with scipy's optimize.minimize routines, but have had mixed results.
Each of my individual data sets is stored as a 15x15 ndarray, so their weighted sum is also a 15x15 array. I define my own 'model' of what the sum should look like (also a 15x15 array), and quantify the goodness of fit between my result and the model using a basic least-squares calculation:
R = np.sum(np.abs(model/np.max(model) - myresult)**2)
'myresult' is produced as a function of some set of parameters 'wts'. I want to find the set of parameters 'wts' which will minimise R.
To do so, I have been trying this:
res = minimize(get_best_weightings, wts, bounds=bnds, method='SLSQP', options={'disp': True, 'eps': 100})
Where my objective function is:
def get_best_weightings(wts):
    wts_tr = wts[0:16]
    wts_ti = wts[16:32]
    for i, j in enumerate(portlist):
        originalwtsr[j] = wts_tr[i]
        originalwtsi[j] = wts_ti[i]
    realwts = originalwtsr
    imagwts = originalwtsi
    myresult = make_weighted_beam(realwts, imagwts, 1)
    R = np.sum((np.abs(modelbeam/np.max(modelbeam) - myresult))**2)
    return R
The input (wts) is an ndarray of shape (32,), and the output, R, is just a scalar, which should get smaller as my fit gets better. By my understanding, this is exactly the sort of problem ("Minimization of scalar function of one or more variables.") that scipy.optimize.minimize is designed to handle (http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.minimize.html).
However, when I run the code, although the optimization routine seems to iterate over different values of all the elements of wts, only a few of them seem to 'stick'. That is, all but four of the values are returned with the same values as my initial guess. To illustrate, I plot my initial guess for wts (in blue) and the optimized values (in red); you can see that for most elements the two lines overlap.
Image: http://imgur.com/p1hQuz7
Changing just these few parameters is not enough to get a good answer, and I can't understand why the other parameters aren't also being optimised. I suspect that maybe I'm not understanding the nature of my minimization problem, so I'm hoping someone here can point out where I'm going wrong.
I have experimented with a variety of minimize's built-in methods (I am by no means committed to SLSQP, or certain that it's the most appropriate choice) and with a variety of step sizes eps. The bounds I am using for my parameters are all (-4000, 4000). I only have scipy version 0.11, so I haven't tested a basin-hopping routine to get the global minimum (that needs 0.12). I have looked at scipy.optimize.brute, but haven't tried implementing it yet; I thought I'd check whether anyone can steer me in a better direction first.
Any advice appreciated! Sorry for the wall of text and the possibly (probably?) idiotic question. I can post more of my code if necessary, but it's pretty long and unpolished.
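For comparison, here is a minimal, self-contained sketch of the same kind of fit with made-up data, passing the data sets to the objective through args instead of module-level globals (all names, shapes, and values below are assumptions for illustration only):

import numpy as np
from scipy.optimize import minimize

np.random.seed(0)
datasets = np.random.random((16, 15, 15))    # 16 individual 15x15 data sets (made up)
model = np.random.random((15, 15))           # the target 15x15 model (made up)

def objective(wts, datasets, model):
    # Weighted sum of the 16 data sets, then a least-squares residual vs. the model
    weighted = np.tensordot(wts, datasets, axes=1)       # shape (15, 15)
    return np.sum(np.abs(model / np.max(model) - weighted)**2)

wts0 = np.ones(16)
bnds = [(-4000, 4000)] * 16
res = minimize(objective, wts0, args=(datasets, model),
               bounds=bnds, method='SLSQP', options={'disp': True})
print(res.x)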

Classify into one of two sets (without learning)

I am dealing with a problem where I would like to automatically divide a set into two subsets, knowing that ALMOST ALL of the objects in set A will have greater values in all of the dimensions than the objects in set B.
I know I could use machine learning, but I need this to be fully automated, because in different instances of the problem the objects of set A and set B will have different values (so values in set B of problem instance 2 might be greater than values in set A of problem instance 1!).
I imagine the solution could be something like finding the objects which are the best representatives of those two sets (the ones around which the density of objects is highest).
Finding N best representatives of both sets would be sufficient for me.
Does anyone know the name of this problem and/or could you propose an implementation? (Python is preferable.)
Cheers!
You could try some of the clustering methods, which belong to unsupervised machine learning. The result depends on your data and how they are distributed. Judging by your picture, I think the k-means algorithm could work. There is a Python library for machine learning, scikit-learn, which already contains a k-means implementation: http://scikit-learn.org/stable/modules/clustering.html#k-means
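A minimal sketch, assuming the objects are rows of a 2-D NumPy array X (the data below is made up for illustration):

import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(200, 5)               # hypothetical data: 200 objects, 5 dimensions

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_                      # 0 or 1 for each object

# Treat the cluster whose centre is larger overall as "set A"
a_label = np.argmax(km.cluster_centers_.sum(axis=1))
set_A = X[labels == a_label]
set_B = X[labels != a_label]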
If your data is as simple as you describe, then there are some rather obvious approaches.
Center and count:
Center your data set, and count for each object how many of its values are positive. If more values are positive than negative, it will likely be in the red class.
Length histogram:
Compute the sum of each vector and make a histogram of these values. Split at the largest gap: vectors whose sum is above the threshold go in one group, the others in the lower group.
I have made an IPython notebook available that demonstrates this approach.
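For illustration, here is a rough sketch of both ideas, assuming the objects are rows of a NumPy array X (the data below is made up):

import numpy as np

np.random.seed(0)
X = np.vstack([np.random.normal(3.0, 1.0, (100, 5)),   # objects that behave like "set A"
               np.random.normal(0.0, 1.0, (100, 5))])  # objects that behave like "set B"

# Center and count: centre the data and count positive values per object
centered = X - X.mean(axis=0)
positives = (centered > 0).sum(axis=1)
in_A_by_count = positives > X.shape[1] / 2              # majority of dimensions above the mean

# Length histogram: split the per-object sums at the largest gap
sums = X.sum(axis=1)
order = np.argsort(sums)
gaps = np.diff(sums[order])
threshold = sums[order[np.argmax(gaps)]]                # value just below the largest gap
in_A_by_sum = sums > threshold

set_A = X[in_A_by_sum]
set_B = X[~in_A_by_sum]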
