I have the following graph:
I am using Python's scipy.signal.find_peaks to find the peaks, but I am not sure how to do that. I did the following:
per = np.percentile(x,[70])
peaks_control = findPeaks(x, per[0])
where x is the signal
array([1.07541259e+09, 1.13851049e+09, 1.19241492e+09, 1.23706527e+09,
1.27240093e+09, 1.29836131e+09, 1.31217483e+09, 1.32037296e+09,
1.31908858e+09, 1.30896503e+09, 1.29216550e+09, 1.26958042e+09,
1.24561632e+09, 1.21202121e+09, 1.16869371e+09, 1.11054499e+09,
1.04006154e+09, 9.65663403e+08, 8.87706760e+08, 8.09340093e+08,
7.37568765e+08, 6.79736364e+08, 6.38576457e+08, 6.06062937e+08,
5.80650350e+08, 5.55089744e+08, 5.36334499e+08, 5.20236597e+08,
5.06529837e+08, 4.91825175e+08, 4.77937063e+08, 4.65475058e+08,
4.56520513e+08, 4.48393240e+08, 4.41944988e+08, 4.34822844e+08,
4.33688578e+08, 4.33451049e+08, 4.36256177e+08, 4.33553613e+08,
4.29191142e+08, 4.28492541e+08, 4.24465967e+08, 4.20074825e+08,
4.19935897e+08, 4.16652681e+08, 4.12419580e+08, 4.11747552e+08,
4.08801166e+08, 4.02351981e+08, 3.99620513e+08, 3.98716550e+08,
3.46023077e+08, 3.53969464e+08, 4.17131235e+08, 5.19363869e+08,
6.50956410e+08, 8.01530303e+08, 9.50162937e+08, 1.08249790e+09,
1.18242378e+09, 1.22732168e+09, 1.20123077e+09, 1.21067599e+09,
1.21556410e+09, 1.21272261e+09, 1.20310023e+09, 1.18692774e+09,
1.16694033e+09, 1.14330117e+09, 1.11635338e+09, 1.07947529e+09,
1.03222145e+09, 9.73427972e+08, 9.08558974e+08, 8.39966200e+08,
7.70457343e+08, 7.04976224e+08, 6.49436131e+08, 6.02085548e+08,
5.68915385e+08, 5.41638928e+08, 5.18758741e+08, 5.01973660e+08,
4.88766667e+08, 4.77643823e+08, 4.65681818e+08, 4.56193240e+08,
4.46851515e+08, 4.36135198e+08, 4.32282984e+08, 4.27913520e+08,
4.23408625e+08, 4.24119580e+08, 4.22399068e+08, 4.22415385e+08,
4.20193939e+08, 4.17638462e+08, 4.14822378e+08, 4.10636364e+08,
4.08388345e+08, 4.04844522e+08, 4.00571562e+08, 4.00841026e+08,
4.00764802e+08, 4.00432867e+08, 4.00336364e+08, 4.00724709e+08,
4.03048019e+08, 3.57437995e+08, 3.62371096e+08, 4.16658741e+08,
5.10148019e+08, 6.31750117e+08, 7.65175991e+08, 8.96832168e+08,
1.01666597e+09, 1.10373263e+09, 1.14380816e+09, 1.11629790e+09,
1.12228904e+09, 1.12378788e+09, 1.11974825e+09, 1.10812774e+09,
1.09125035e+09, 1.07033566e+09, 1.04667389e+09, 1.02016830e+09,
9.86036830e+08, 9.42176457e+08, 8.88900233e+08, 8.27962005e+08,
7.64362238e+08, 7.00755245e+08, 6.42390909e+08, 5.92395338e+08,
5.52426107e+08, 5.26319114e+08, 5.03317249e+08, 4.85524942e+08,
4.70421911e+08, 4.59389510e+08, 4.51644988e+08, 4.46288578e+08,
4.41076923e+08, 4.37533566e+08, 4.31993007e+08, 4.28625641e+08,
4.25406294e+08, 4.21161538e+08, 4.19049650e+08, 4.16719347e+08,
4.13124242e+08, 4.08404429e+08, 4.06154545e+08, 4.03386014e+08,
4.00980420e+08, 3.99442657e+08, 3.97792075e+08, 3.95606527e+08,
3.97922378e+08, 3.98345221e+08, 3.96253613e+08, 3.95703030e+08,
3.96108392e+08, 3.67136830e+08, 3.58382051e+08, 3.95844289e+08,
4.70853846e+08, 5.76629837e+08, 6.97682284e+08, 8.21169930e+08,
9.32588112e+08, 1.01885804e+09, 1.06315152e+09, 1.05128159e+09,
1.03944545e+09, 1.03769580e+09, 1.03132145e+09, 1.02008601e+09,
1.00327389e+09, 9.85387646e+08, 9.66403030e+08, 9.44620746e+08,
9.18596737e+08, 8.82269697e+08, 8.37750816e+08, 7.84877156e+08,
7.27590443e+08, 6.70183217e+08, 6.14567832e+08, 5.67404895e+08,
5.30862471e+08, 5.03108625e+08, 4.84348718e+08, 4.68116550e+08,
4.55809907e+08, 4.46616783e+08, 4.39725175e+08, 4.34323077e+08])
The peaks I get are adjacent to each other, since there are little bumps at the second, third and fourth peak sites.
How should I calculate the peaks and ignore such adjacent ones? To calculate the width, prominence, etc., I need to find the peaks first; if I knew them already I might be able to apply some threshold.
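Written directly with scipy's find_peaks, my attempt looks roughly like this (I'm assuming findPeaks above wraps find_peaks with the percentile as the height threshold; the distance and prominence values are only placeholder guesses):
import numpy as np
from scipy.signal import find_peaks
per = np.percentile(x, 70)
# height keeps only samples above the 70th percentile;
# distance and prominence (placeholder values) are meant to suppress
# the small bumps right next to a main peak
peaks_control, props = find_peaks(x, height=per, distance=20, prominence=1e8)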
As you asked in the comments, I'll provide an example. Please note, this is only an example; an exploratory data analysis is always needed to choose the best way to reach your goal.
So, let's create some noisy data
import numpy as np
from scipy.signal import find_peaks, periodogram
import matplotlib.pyplot as plt
size = 100
a = np.linspace(1, .5, size)
x = np.linspace(0, 50, size)
np.random.seed(0)
y = a * np.sin(x) + np.random.normal(0, .1, size) + 5
now, let's try to find the peaks with find_peaks from scipy.signal
peaks = find_peaks(y)[0]
plt.plot(x, y)
plt.plot(x[peaks], y[peaks], marker='o', ls='none')
plt.show()
as you can see, there are some "wrong" peaks. We need to set the distance argument in find_peaks (see the documentation).
Let us suppose that we don't know the distance between the peaks. In this case, we can see that the data are periodic. So we can find the period with a periodogram and use the period as distance in find_peaks
_f, _p = periodogram(y, nfft=2**6)
# calculate the sample rate of x
sample_rate = 1 / np.median(np.diff(x))
periods = 1 / _f[1:] / sample_rate
density = _p[1:] / _p[1:].max()
max_density_idx = density.argmax()
period = periods[max_density_idx]
plt.semilogx(periods, density)
plt.scatter(period, density[max_density_idx], color='r')
plt.title(f"period {period:.2f}")
plt.show()
Now we can use the period as distance argument in find_peaks
peaks = find_peaks(y, distance=period)[0]
plt.plot(x, y)
plt.plot(x[peaks], y[peaks], marker='o', ls='none')
plt.show()
update
In your specific case, it's a little bit different.
Define the signal (I'll call variables X and Y)
Y = np.array([1.07541259e+09, 1.13851049e+09, 1.19241492e+09, 1.23706527e+09, 1.27240093e+09, 1.29836131e+09, 1.31217483e+09, 1.32037296e+09, 1.31908858e+09, 1.30896503e+09, 1.29216550e+09, 1.26958042e+09, 1.24561632e+09, 1.21202121e+09, 1.16869371e+09, 1.11054499e+09, 1.04006154e+09, 9.65663403e+08, 8.87706760e+08, 8.09340093e+08, 7.37568765e+08, 6.79736364e+08, 6.38576457e+08, 6.06062937e+08, 5.80650350e+08, 5.55089744e+08, 5.36334499e+08, 5.20236597e+08, 5.06529837e+08, 4.91825175e+08, 4.77937063e+08, 4.65475058e+08, 4.56520513e+08, 4.48393240e+08, 4.41944988e+08, 4.34822844e+08, 4.33688578e+08, 4.33451049e+08, 4.36256177e+08, 4.33553613e+08, 4.29191142e+08, 4.28492541e+08, 4.24465967e+08, 4.20074825e+08, 4.19935897e+08, 4.16652681e+08, 4.12419580e+08, 4.11747552e+08, 4.08801166e+08, 4.02351981e+08, 3.99620513e+08, 3.98716550e+08, 3.46023077e+08, 3.53969464e+08, 4.17131235e+08, 5.19363869e+08, 6.50956410e+08, 8.01530303e+08, 9.50162937e+08, 1.08249790e+09, 1.18242378e+09, 1.22732168e+09, 1.20123077e+09, 1.21067599e+09, 1.21556410e+09, 1.21272261e+09, 1.20310023e+09, 1.18692774e+09, 1.16694033e+09, 1.14330117e+09, 1.11635338e+09, 1.07947529e+09, 1.03222145e+09, 9.73427972e+08, 9.08558974e+08, 8.39966200e+08, 7.70457343e+08, 7.04976224e+08, 6.49436131e+08, 6.02085548e+08, 5.68915385e+08, 5.41638928e+08, 5.18758741e+08, 5.01973660e+08, 4.88766667e+08, 4.77643823e+08, 4.65681818e+08, 4.56193240e+08, 4.46851515e+08, 4.36135198e+08, 4.32282984e+08, 4.27913520e+08, 4.23408625e+08, 4.24119580e+08, 4.22399068e+08, 4.22415385e+08, 4.20193939e+08, 4.17638462e+08, 4.14822378e+08, 4.10636364e+08, 4.08388345e+08, 4.04844522e+08, 4.00571562e+08, 4.00841026e+08, 4.00764802e+08, 4.00432867e+08, 4.00336364e+08, 4.00724709e+08, 4.03048019e+08, 3.57437995e+08, 3.62371096e+08, 4.16658741e+08, 5.10148019e+08, 6.31750117e+08, 7.65175991e+08, 8.96832168e+08, 1.01666597e+09, 1.10373263e+09, 1.14380816e+09, 1.11629790e+09, 1.12228904e+09, 1.12378788e+09, 1.11974825e+09, 1.10812774e+09, 1.09125035e+09, 1.07033566e+09, 1.04667389e+09, 1.02016830e+09, 9.86036830e+08, 9.42176457e+08, 8.88900233e+08, 8.27962005e+08, 7.64362238e+08, 7.00755245e+08, 6.42390909e+08, 5.92395338e+08, 5.52426107e+08, 5.26319114e+08, 5.03317249e+08, 4.85524942e+08, 4.70421911e+08, 4.59389510e+08, 4.51644988e+08, 4.46288578e+08, 4.41076923e+08, 4.37533566e+08, 4.31993007e+08, 4.28625641e+08, 4.25406294e+08, 4.21161538e+08, 4.19049650e+08, 4.16719347e+08, 4.13124242e+08, 4.08404429e+08, 4.06154545e+08, 4.03386014e+08, 4.00980420e+08, 3.99442657e+08, 3.97792075e+08, 3.95606527e+08, 3.97922378e+08, 3.98345221e+08, 3.96253613e+08, 3.95703030e+08, 3.96108392e+08, 3.67136830e+08, 3.58382051e+08, 3.95844289e+08, 4.70853846e+08, 5.76629837e+08, 6.97682284e+08, 8.21169930e+08, 9.32588112e+08, 1.01885804e+09, 1.06315152e+09, 1.05128159e+09, 1.03944545e+09, 1.03769580e+09, 1.03132145e+09, 1.02008601e+09, 1.00327389e+09, 9.85387646e+08, 9.66403030e+08, 9.44620746e+08, 9.18596737e+08, 8.82269697e+08, 8.37750816e+08, 7.84877156e+08, 7.27590443e+08, 6.70183217e+08, 6.14567832e+08, 5.67404895e+08, 5.30862471e+08, 5.03108625e+08, 4.84348718e+08, 4.68116550e+08, 4.55809907e+08, 4.46616783e+08, 4.39725175e+08, 4.34323077e+08])
X = np.arange(Y.size)
Since Y.size is 200 and in your plot there are 200 secs, I assume that the sample rate is 1 sec.
If we search for the peaks with default distance we find a lot of unwanted peaks
peaks = find_peaks(Y)[0]
plt.plot(X, Y)
plt.plot(X[peaks], Y[peaks], marker='o', ls='none')
plt.show()
Let's do a periodogram
_f, _p = periodogram(Y, nfft=2**12)
# the sample rate of your signal
sample_rate = 1
periods = 1 / _f[1:] / sample_rate
density = _p[1:] / _p[1:].max()
max_density_idx = density.argmax()
period = periods[max_density_idx]
p_peaks_idx = find_peaks(density)[0]
plt.semilogx(periods, density)
plt.scatter(period, density[max_density_idx], color='r')
period_peaks = []
for p_peak in p_peaks_idx:
    if density[p_peak] < .1:
        continue
    period_peaks.append(periods[p_peak])
    plt.scatter(periods[p_peak], density[p_peak])
    plt.text(periods[p_peak], density[p_peak], f"{periods[p_peak]:.1f} ", ha='right', va='center')
plt.title('periodogram')
plt.show()
We found two main periods
period_peaks
[56.888888888888886, 28.444444444444443]
If we use the higher density period (56.9, the fundamental, or 1st harmonic) we miss a peak
peaks = find_peaks(Y, distance=period_peaks[0])[0]
plt.plot(X, Y)
plt.plot(X[peaks], Y[peaks], marker='o', ls='none')
plt.show()
This could be because you have too few observations, or because the periodicity is not constant.
If we empirically subtract a quantity (say 10) from the period, we find all the peaks
peaks = find_peaks(Y, distance=period_peaks[0] - 10)[0]
plt.plot(X, Y)
plt.plot(X[peaks], Y[peaks], marker='o', ls='none')
plt.show()
So we got peaks at
X[peaks]
array([ 7, 61, 118, 174])
taking the difference, we see they're not regular (at this sample rate and with these few observations)
np.diff(X[peaks])
array([54, 57, 56])
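Once the peaks are found, the width and prominence mentioned in the question can be computed directly from them; a minimal sketch, assuming Y and peaks from the code above:
from scipy.signal import peak_prominences, peak_widths
prominences = peak_prominences(Y, peaks)[0]
# widths evaluated at half of each peak's prominence
widths = peak_widths(Y, peaks, rel_height=0.5)[0]
print(prominences)
print(widths)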
I have a file with three columns of data: zenith (Z, from 0 to 90°), azimuth (A, from 0 to 360°), and radiance as the color variable.
I need to use Python with matplotlib to plot this data into something resembling this:
This is my code so far (it returns an error):
import matplotlib.pyplot as plt
import numpy as np
# `data` has the following shape:
# [
# [Zenith value going from 0 to 90],
# [Azimuth values (0 to 360) increasing by 1 and looping back after 360],
# [radiance: floats that need to be mapped by the color value]
#]
data = [[6.000e+00, 1.200e+01, 1.700e+01, 2.300e+01, 2.800e+01, 3.400e+01, 3.900e+01, 4.500e+01, 5.000e+01, 5.600e+01, 6.200e+01, 6.700e+01, 7.300e+01, 7.800e+01, 8.400e+01, 8.900e+01, 3.934e+01, 4.004e+01, 4.054e+01, 4.114e+01, 4.154e+01, 4.204e+01, 4.254e+01, 4.294e+01, 4.334e+01, 4.374e+01, 4.414e+01, 4.454e+01, 4.494e+01, 4.534e+01, 4.564e+01, 4.604e+01, 4.644e+01, 4.684e+01, 4.714e+01, 4.754e+01, 4.794e+01, 4.824e+01, 4.864e+01, 4.904e+01, 4.944e+01, 4.984e+01, 5.014e+01, 5.054e+01, 5.094e+01, 5.134e+01, 5.174e+01, 5.214e+01, 5.264e+01, 5.304e+01, 5.344e+01, 5.394e+01, 5.444e+01, 5.494e+01, 5.544e+01, 5.604e+01, 5.674e+01, 5.764e+01],
        [1.960e+02, 3.600e+01, 2.360e+02, 7.600e+01, 2.760e+02, 1.160e+02, 3.160e+02, 1.560e+02, 3.560e+02, 1.960e+02, 3.600e+01, 2.360e+02, 7.600e+01, 2.760e+02, 1.160e+02, 3.160e+02, 6.500e+00, 3.400e+00, 3.588e+02, 2.500e+00, 3.594e+02, 3.509e+02, 5.000e-01, 6.900e+00, 1.090e+01, 3.478e+02, 1.250e+01, 1.050e+01, 7.300e+00, 2.700e+00, 3.571e+02, 3.507e+02, 1.060e+01, 3.200e+00, 3.556e+02, 3.480e+02, 7.300e+00, 3.597e+02, 3.527e+02, 1.260e+01, 6.600e+00, 1.200e+00, 3.570e+02, 3.538e+02, 3.520e+02, 3.516e+02, 3.528e+02, 3.560e+02, 1.200e+00, 8.800e+00, 3.567e+02, 1.030e+01, 6.800e+00, 8.300e+00, 3.583e+02, 3.581e+02, 3.568e+02, 3.589e+02],
        [3.580e-04, 6.100e-04, 3.220e-04, 4.850e-04, 4.360e-04, 2.910e-04, 1.120e-03, 2.320e-04, 4.300e-03, 2.680e-04, 1.700e-03, 3.790e-04, 7.460e-04, 8.190e-04, 1.030e-03, 3.650e-03, 3.050e-03, 3.240e-03, 3.340e-03, 3.410e-03, 3.490e-03, 3.290e-03, 3.630e-03, 3.510e-03, 3.320e-03, 3.270e-03, 3.280e-03, 3.470e-03, 3.720e-03, 3.960e-03, 3.980e-03, 3.700e-03, 3.630e-03, 4.100e-03, 4.080e-03, 3.600e-03, 3.990e-03, 4.530e-03, 4.040e-03, 3.630e-03, 4.130e-03, 4.370e-03, 4.340e-03, 4.210e-03, 4.100e-03, 4.090e-03, 4.190e-03, 4.380e-03, 4.460e-03, 4.080e-03, 4.420e-03, 3.960e-03, 4.230e-03, 4.120e-03, 4.440e-03, 4.420e-03, 4.370e-03, 4.380e-03]]
rad = data[0]
azm = data[1]
# From what I understand, I need to create a meshgrid from the zenith and azimuth values
r, th = np.meshgrid(rad, azm)
z = data[2] # This doesn't work as `pcolormesh` expects this to be a 2d array
plt.subplot(projection="polar")
plt.pcolormesh(th, r, z, shading="auto")
plt.plot(azm, r, color="k", ls="none")
plt.show()
Note: my actual data goes on for 56k lines and looks like this (Ignore the 4th column):
The example data above is my attempt to reduce the resolution of this massive file, so I only used 1/500 of the lines of data. This might be the wrong way to reduce the resolution, please correct me if it is!
Every tutorial I've seen generates the z value from the r array produced by meshgrid. This leaves me confused about how I would convert my z column into a 2D array that would properly map to the zenith and azimuth values.
They'll use something like this:
z = (r ** 2.0) / 4.0
So, taking the exact shape of r and applying a transformation to create the color.
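For example, a typical tutorial snippet looks something like this (the grid sizes are made up; the point is that z is derived from r and therefore already has r's 2D shape):
import numpy as np
import matplotlib.pyplot as plt
azimuths = np.radians(np.linspace(0, 360, 60))
zeniths = np.linspace(0, 90, 30)
r, th = np.meshgrid(zeniths, azimuths)
z = (r ** 2.0) / 4.0  # z computed from r, so it is already 2D
plt.subplot(projection="polar")
plt.pcolormesh(th, r, z, shading="auto")
plt.show()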
The solution was in the data file all along. I needed to better understand what np.meshgrid actually does. It turns out the data already is a 2D array; it just needed to be reshaped. I also found a flaw in the file, and fixing it reduced its lines from 56k to 15k. This was small enough that I did not need to reduce the resolution.
Here's how I reshaped my data, and what the solution looked like:
import matplotlib.pyplot as plt
import numpy as np
with open("data.txt") as f:
lines = np.array(
[
[float(n) for n in line.split("\t")]
for i, line in enumerate(f.read().splitlines())
]
)
data = [np.reshape(a, (89, 180)) for a in lines.T]
rad = np.radians(data[1])
azm = data[0]
z = data[2]
plt.subplot(projection="polar")
plt.pcolormesh(rad, azm, z, cmap="coolwarm", shading="auto")
plt.colorbar()
plt.show()
The simplest way to plot the given data is with a polar scatter plot.
Using blue for low values and red for high values, it could look like:
import matplotlib.pyplot as plt
import numpy as np
data = [[6.000e+00, 1.200e+01, 1.700e+01, 2.300e+01, 2.800e+01, 3.400e+01, 3.900e+01, 4.500e+01, 5.000e+01, 5.600e+01, 6.200e+01, 6.700e+01, 7.300e+01, 7.800e+01, 8.400e+01, 8.900e+01, 3.934e+01, 4.004e+01, 4.054e+01, 4.114e+01, 4.154e+01, 4.204e+01, 4.254e+01, 4.294e+01, 4.334e+01, 4.374e+01, 4.414e+01, 4.454e+01, 4.494e+01, 4.534e+01, 4.564e+01, 4.604e+01, 4.644e+01, 4.684e+01, 4.714e+01, 4.754e+01, 4.794e+01, 4.824e+01, 4.864e+01, 4.904e+01, 4.944e+01, 4.984e+01, 5.014e+01, 5.054e+01, 5.094e+01, 5.134e+01, 5.174e+01, 5.214e+01, 5.264e+01, 5.304e+01, 5.344e+01, 5.394e+01, 5.444e+01, 5.494e+01, 5.544e+01, 5.604e+01, 5.674e+01, 5.764e+01],
[1.960e+02, 3.600e+01, 2.360e+02, 7.600e+01, 2.760e+02, 1.160e+02, 3.160e+02, 1.560e+02, 3.560e+02, 1.960e+02, 3.600e+01, 2.360e+02, 7.600e+01, 2.760e+02, 1.160e+02, 3.160e+02, 6.500e+00, 3.400e+00, 3.588e+02, 2.500e+00, 3.594e+02, 3.509e+02, 5.000e-01, 6.900e+00, 1.090e+01, 3.478e+02, 1.250e+01, 1.050e+01, 7.300e+00, 2.700e+00, 3.571e+02, 3.507e+02, 1.060e+01, 3.200e+00, 3.556e+02, 3.480e+02, 7.300e+00, 3.597e+02, 3.527e+02, 1.260e+01, 6.600e+00, 1.200e+00, 3.570e+02, 3.538e+02, 3.520e+02, 3.516e+02, 3.528e+02, 3.560e+02, 1.200e+00, 8.800e+00, 3.567e+02, 1.030e+01, 6.800e+00, 8.300e+00, 3.583e+02, 3.581e+02, 3.568e+02, 3.589e+02],
[3.580e-04, 6.100e-04, 3.220e-04, 4.850e-04, 4.360e-04, 2.910e-04, 1.120e-03, 2.320e-04, 4.300e-03, 2.680e-04, 1.700e-03, 3.790e-04, 7.460e-04, 8.190e-04, 1.030e-03, 3.650e-03, 3.050e-03, 3.240e-03, 3.340e-03, 3.410e-03, 3.490e-03, 3.290e-03, 3.630e-03, 3.510e-03, 3.320e-03, 3.270e-03, 3.280e-03, 3.470e-03, 3.720e-03, 3.960e-03, 3.980e-03, 3.700e-03, 3.630e-03, 4.100e-03, 4.080e-03, 3.600e-03, 3.990e-03, 4.530e-03, 4.040e-03, 3.630e-03, 4.130e-03, 4.370e-03, 4.340e-03, 4.210e-03, 4.100e-03, 4.090e-03, 4.190e-03, 4.380e-03, 4.460e-03, 4.080e-03, 4.420e-03, 3.960e-03, 4.230e-03, 4.120e-03, 4.440e-03, 4.420e-03, 4.370e-03, 4.380e-03]]
rad = np.radians(data[1])
azm = data[0]
z = data[2]
plt.subplot(projection="polar")
plt.scatter(rad, azm, c=z, cmap='coolwarm')
plt.colorbar()
plt.show()
Creating such a scatter plot with your real data gives an idea of how it looks. You might want to choose a different colormap, depending on what you want to convey. You can also choose a smaller dot size (for example plt.scatter(rad, azm, c=z, cmap='plasma', s=1, ec='none')) if there are too many points.
A simple way to create a filled image from non-gridded data uses tricontourf with 256 colors (it looks quite dull with the given data, so I didn't add an example plot):
plt.subplot(projection="polar")
plt.tricontourf(rad, azm, z, levels=256, cmap='coolwarm')
I am trying to generate a graph and save an image of it in Python. Although the plotting of the values seems OK and I can get my picture, the scale of the graph is badly shifted.
If you compare the correct graph from the tutorial example with my bad graph generated from a different dataset, the curves are cut off at the bottom too early: the y-axis should start just above the highest values, and I should also see the curves for the highest x-values (in my case around 10^3).
Honestly, I think the problem is the scale of the y-axis, but I do not know which parameters I should change to fix it. I tried to play with some numbers (see the script below), but without any good results.
This is the code for calculation and generation of the graph image:
import numpy as np
hic_data = load_hic_data_from_reads('/home/besy/Hi-C/MOREX/TCC35_parsedV2/TCC35_V2_interaction_filtered.tsv', resolution=100000)
min_diff = 1
max_diff = 500
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(12, 12))
for cnum, c in enumerate(hic_data.chromosomes):
    if c in ['ChrUn']:
        continue
    dist_intr = []
    for diff in xrange(min_diff, min((max_diff, 1 + hic_data.chromosomes[c]))):
        beg, end = hic_data.section_pos[c]
        dist_intr.append([])
        for i in xrange(beg, end - diff):
            dist_intr[-1].append(hic_data[i, i + diff])
    mean_intrp = []
    for d in dist_intr:
        if len(d):
            mean_intrp.append(float(np.nansum(d)) / len(d))
        else:
            mean_intrp.append(0.0)
    xp, yp = range(min_diff, max_diff), mean_intrp
    x = []
    y = []
    for k in xrange(len(xp)):
        if yp[k]:
            x.append(xp[k])
            y.append(yp[k])
    l = plt.plot(x, y, '-', label=c, alpha=0.8)
    plt.hlines(mean_intrp[2], 3, 5.25 + np.exp(cnum / 4.3), color=l[0].get_color(),
               linestyle='--', alpha=0.5)
    plt.text(5.25 + np.exp(cnum / 4.3), mean_intrp[2], c, color=l[0].get_color())
    plt.plot(3, mean_intrp[2], '+', color=l[0].get_color())
plt.xscale('log')
plt.yscale('log')
plt.ylabel('number of interactions')
plt.xlabel('Distance between bins (in 100 kb bins)')
plt.grid()
plt.ylim(2, 250)
_ = plt.xlim(1, 110)
fig.savefig('/home/besy/Hi-C/MOREX/TCC35_V2_results/filtered/TCC35_V2_decay.png', dpi=fig.dpi)
I think the problem is in the scale: I need the y-axis to start from 10^-1 (0.1). To change this I tried the following:
min_diff = 0.1
.
.
.
dist_intr = []
for diff in xrange(min_diff, min((max_diff, 0.1 + hic_data.chromosomes[c]))):
.
.
.
plt.ylim((0.1, 20))
But these values return: "integer argument expected, got float".
I also tried to play a little with the max_diff, plt.ylim and plt.xlim parameters, but nothing changed much.
I would like to ask which parameter(s) I need to change, and how, so that the graph is scaled correctly in the saved image. Thank you in advance.
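A note on the error message: xrange accepts only integers, because min_diff is a bin index used inside the loops, not an axis limit. The visible range of the plot is set with the limit calls after plotting; a minimal sketch, with illustrative limit values:
# min_diff / max_diff must stay integers: they index distance bins in the loops
min_diff = 1
max_diff = 500
# the displayed range is set on the axes, in data units, after plotting
plt.yscale('log')
plt.ylim(0.1, 250)   # the y-axis now starts at 10^-1
plt.xlim(1, 110)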
I will try to be more specific about what I mean. First, let me show you the code:
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1977)
x, y = np.random.random((2, 1000))
xbins = np.linspace(0, 1, 10)
ybins = np.linspace(0, 1, 10)
counts, _, _ = np.histogram2d(x, y, bins=(xbins, ybins))
print counts
You will get a two-dimensional array from this code. Now, if I have another array
z = np.random.random((2, 1000))
then how do I get a three-dimensional distribution array from these three arrays? I tried:
zbins = np.linspace(0, 1, 10)
counts, _,_,_ = np.histogramdd(x, y, z, bins=(xbins, ybins, zbins))
But it's no use.
What's more, the real data file is too big to process with a loop statement, which would cost me hours to run and would not be easy to check.
Thanks for thinking about the question!
I made the following code according to your last comment
import numpy as np
data = np.random.random((1000, 3))
nbins = 10
H, [bx, by, bz] = np.histogramdd(data, bins=(nbins, nbins, nbins),
                                 range=((0, 1), (0, 1), (0, 1)))
H is the count of points in each grid cell. In your previous code, histogramdd was not used correctly: the input data is the first argument, and it should be an N x 3 array in your case.
You can see the documentation of histogramdd here.
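If the coordinates start out as separate arrays, they can be stacked column-wise first; a small sketch, assuming x, y and z are each 1-D with one value per point:
import numpy as np
x = np.random.random(1000)
y = np.random.random(1000)
z = np.random.random(1000)
# stack the three coordinates into the (N, 3) layout that histogramdd expects
data = np.column_stack((x, y, z))
H, (bx, by, bz) = np.histogramdd(data, bins=10, range=((0, 1), (0, 1), (0, 1)))
print(H.shape)  # (10, 10, 10)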
I have a script for plotting astronomical data of Redmapper clusters using a CSV file. I can get the data points from it and want to plot them using different colors depending on their redshift values: I am binning the dataset into 3 bins (0.1-0.2, 0.2-0.25, 0.25-0.31) based on redshift.
The problem arises in my code after I determine which bin a data point belongs to: I want to have 3 labels in the legend corresponding to the red, green and blue data points, but this is not happening and I don't know why. I am using plot() instead of scatter() as I also had to add the best fit from the data to the same figure, so everything needs to be in one figure.
import numpy as np
import matplotlib.pyplot as py
import csv
z = open("Sheet4CSV.csv","rU")
data = csv.reader(z)
x = []
y = []
ylow = []
yupp = []
xlow = []
xupp = []
redshift = []
for r in data:
    x.append(float(r[2]))
    y.append(float(r[5]))
    xlow.append(float(r[3]))
    xupp.append(float(r[4]))
    ylow.append(float(r[6]))
    yupp.append(float(r[7]))
    redshift.append(float(r[1]))
from operator import sub
xerr_l = map(sub,x,xlow)
xerr_u = map(sub,xupp,x)
yerr_l = map(sub,y,ylow)
yerr_u = map(sub,yupp,y)
py.xlabel("$Original\ Tx\ XCS\ pipeline\ Tx\ keV$")
py.ylabel("$Iterative\ Tx\ pipeline\ keV$")
py.xlim(0,12)
py.ylim(0,12)
py.title("Redmapper Clusters comparison of Tx pipelines")
ax1 = py.subplot(111)
##Problem starts here after the previous line##
for p in redshift:
    for i in xrange(84):
        p=redshift[i]
        if 0.1<=p<0.2:
            ax1.plot(x[i],y[i],color="b", marker='.', linestyle = " ")#, label = "$z < 0.2$")
            exit
        if 0.2<=p<0.25:
            ax1.plot(x[i],y[i],color="g", marker='.', linestyle = " ")#, label="$0.2 \leq z < 0.25$")
            exit
        if 0.25<=p<=0.3:
            ax1.plot(x[i],y[i],color="r", marker='.', linestyle = " ")#, label="$z \geq 0.25$")
            exit
##There seems nothing wrong after this point##
py.errorbar(x,y,yerr=[yerr_l,yerr_u],xerr=[xerr_l,xerr_u], fmt= " ",ecolor='magenta', label="Error bars")
cof = np.polyfit(x,y,1)
p = np.poly1d(cof)
l = np.linspace(0,12,100)
py.plot(l,p(l),"black",label="Best fit")
py.plot([0,15],[0,15],"black", linestyle="dotted", linewidth=2.0, label="line $y=x$")
py.grid()
box = ax1.get_position()
ax1.set_position([box.x1,box.y1,box.width, box.height])
py.legend(loc='center left',bbox_to_anchor=(1,0.5))
py.show()
In the first 'for' loop, I have indexed every value 'p' in the list 'redshift' so that bins can be created using the 'if' statements. But if I add the labels that are commented out against each py.plot() inside the 'if' statements, every data point 'i' that gets plotted at (x[i], y[i]) takes the label, and my legend ends up with 87 labels in total (including the 3 mentioned elsewhere in the code)!
I essentially need one label for each bin.
Please tell me what needs to be done after the bins are created and the py.plot() commands are used. Thanks in advance :-)
Sorry I cannot post my image here due to low reputation!
The data 'appended' for x, y and redshift lists from the csv file are as follows:
x=[5.031,10.599,10.589,8.548,9.089,8.675,3.588,1.244,3.023,8.632,8.953,7.603,7.513,2.917,7.344,7.106,3.889,7.287,3.367,6.839,2.801,2.316,1.328,6.31,6.19,6.329,6.025,5.629,6.123,5.892,5.438,4.398,4.542,4.624,4.501,4.504,5.033,5.068,4.197,2.854,4.784,2.158,4.054,3.124,3.961,4.42,3.853,3.658,1.858,4.537,2.072,3.573,3.041,5.837,3.652,3.209,2.742,2.732,1.312,3.635,2.69,3.32,2.488,2.996,2.269,1.701,3.935,2.015,0.798,2.212,1.672,1.925,3.21,1.979,1.794,2.624,2.027,3.66,1.073,1.007,1.57,0.854,0.619,0.547]
y=[5.255,10.897,11.045,9.125,9.387,17.719,4.025,1.389,4.152,8.703,9.051,8.02,7.774,3.139,7.543,7.224,4.155,7.416,3.905,6.868,2.909,2.658,1.651,6.454,6.252,6.541,6.152,5.647,6.285,6.079,5.489,4.541,4.634,8.851,4.554,4.555,5.559,5.144,5.311,5.839,5.364,3.18,4.352,3.379,4.059,4.575,3.914,5.736,2.304,4.68,3.187,3.756,3.419,9.118,4.595,3.346,3.603,6.313,1.816,4.34,2.732,4.978,2.719,3.761,2.623,2.1,4.956,2.316,4.231,2.831,1.954,2.248,6.573,2.276,2.627,3.85,3.545,25.405,3.996,1.347,1.679,1.435,0.759,0.677]
redshift = [0.12,0.25,0.23,0.23,0.27,0.26,0.12,0.27,0.17,0.18,0.17,0.3,0.23,0.1,0.23,0.29,0.29,0.12,0.13,0.26,0.11,0.24,0.13,0.21,0.17,0.2,0.3,0.29,0.23,0.27,0.25,0.21,0.11,0.15,0.1,0.26,0.23,0.12,0.23,0.26,0.2,0.17,0.22,0.26,0.25,0.12,0.19,0.24,0.18,0.15,0.27,0.14,0.14,0.29,0.29,0.26,0.15,0.29,0.24,0.24,0.23,0.26,0.29,0.22,0.13,0.18,0.24,0.14,0.24,0.24,0.17,0.26,0.29,0.11,0.14,0.26,0.28,0.26,0.28,0.27,0.23,0.26,0.23,0.19]
Working with numerical data like this, you should really consider using a numerical library, like numpy.
The problem in your code arises from processing each record (a coordinate (x, y) and the corresponding redshift value) one at a time. You are calling plot for each point, thereby creating a legend entry for each of those 84 data points. You should consider your "bins" as groups of data that belong to the same dataset and process them as such. You could use "logical masks" to distinguish between your "bins", as shown below.
It's also not clear why you call exit after each plotting action.
import numpy as np
import matplotlib.pyplot as plt
x = np.array([5.031,10.599,10.589,8.548,9.089,8.675,3.588,1.244,3.023,8.632,8.953,7.603,7.513,2.917,7.344,7.106,3.889,7.287,3.367,6.839,2.801,2.316,1.328,6.31,6.19,6.329,6.025,5.629,6.123,5.892,5.438,4.398,4.542,4.624,4.501,4.504,5.033,5.068,4.197,2.854,4.784,2.158,4.054,3.124,3.961,4.42,3.853,3.658,1.858,4.537,2.072,3.573,3.041,5.837,3.652,3.209,2.742,2.732,1.312,3.635,2.69,3.32,2.488,2.996,2.269,1.701,3.935,2.015,0.798,2.212,1.672,1.925,3.21,1.979,1.794,2.624,2.027,3.66,1.073,1.007,1.57,0.854,0.619,0.547])
y = np.array([5.255,10.897,11.045,9.125,9.387,17.719,4.025,1.389,4.152,8.703,9.051,8.02,7.774,3.139,7.543,7.224,4.155,7.416,3.905,6.868,2.909,2.658,1.651,6.454,6.252,6.541,6.152,5.647,6.285,6.079,5.489,4.541,4.634,8.851,4.554,4.555,5.559,5.144,5.311,5.839,5.364,3.18,4.352,3.379,4.059,4.575,3.914,5.736,2.304,4.68,3.187,3.756,3.419,9.118,4.595,3.346,3.603,6.313,1.816,4.34,2.732,4.978,2.719,3.761,2.623,2.1,4.956,2.316,4.231,2.831,1.954,2.248,6.573,2.276,2.627,3.85,3.545,25.405,3.996,1.347,1.679,1.435,0.759,0.677])
redshift = np.array([0.12,0.25,0.23,0.23,0.27,0.26,0.12,0.27,0.17,0.18,0.17,0.3,0.23,0.1,0.23,0.29,0.29,0.12,0.13,0.26,0.11,0.24,0.13,0.21,0.17,0.2,0.3,0.29,0.23,0.27,0.25,0.21,0.11,0.15,0.1,0.26,0.23,0.12,0.23,0.26,0.2,0.17,0.22,0.26,0.25,0.12,0.19,0.24,0.18,0.15,0.27,0.14,0.14,0.29,0.29,0.26,0.15,0.29,0.24,0.24,0.23,0.26,0.29,0.22,0.13,0.18,0.24,0.14,0.24,0.24,0.17,0.26,0.29,0.11,0.14,0.26,0.28,0.26,0.28,0.27,0.23,0.26,0.23,0.19])
bin3 = 0.25 <= redshift
bin2 = np.logical_and(0.2 <= redshift, redshift < 0.25)
bin1 = np.logical_and(0.1 <= redshift, redshift < 0.2)
plt.ion()
labels = ("$z < 0.2$", "$0.2 \leq z < 0.25$", "$z \geq 0.25$")
colors = ('r', 'g', 'b')
for bin, label, co in zip((bin1, bin2, bin3), labels, colors):
    plt.plot(x[bin], y[bin], color=co, ls='none', marker='o', label=label)
plt.legend()
plt.show()