I would like to draw a graph over a predefined range, say from 0 to 605. My PC is not powerful enough to calculate every value, so I would like to compute only some of the points and connect them to get a curve over the interval [0; 605]. How can I do this? Is it possible?
I tried adding a step to the range, but that automatically shrinks the x-axis.
With my current code I only get 605/10 ≈ 60 values, so the x-axis of the graph runs from 0 to 60 instead of 0 to 605.
import matplotlib.pyplot as plt

tab = []
for k in range(1, 605, 10):       # only every 10th k is computed
    img2 = rgb(k)                 # rgb() and psnr() are my own functions
    d = psnr(img1, img2)
    tab.append(d)
plt.plot(tab)                     # plots against the index 0..60, not against k
plt.xlabel("k")
plt.ylabel("PSNR")
plt.show()
You can set the xticks by yourself: plt.xticks(x_values, [0, 10, 20, 30, ...])
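For example, a minimal sketch of that approach (assuming tab has been filled as in the question, one PSNR value per k in range(1, 605, 10); the [::6] stride is just an arbitrary choice to thin out the labels):
import matplotlib.pyplot as plt

positions = range(len(tab))                 # where the points are actually drawn
k_values = range(1, 605, 10)                # the k each point corresponds to
plt.plot(tab)
plt.xticks(positions[::6], k_values[::6])   # relabel every 6th tick with its real k
plt.show()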
You need to plot with the pattern plt.plot(x, y) instead of only plt.plot(y).
A quick fix (just to show the idea) can be:
Create an empty x, just like tab: x = []
Append k to x in the loop: x.append(k)
Do plt.plot(x, tab), instead of plt.plot(tab)
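Putting those three steps together (rgb, psnr, and img1 come from the question's code):
import matplotlib.pyplot as plt

x = []
tab = []
for k in range(1, 605, 10):
    img2 = rgb(k)
    d = psnr(img1, img2)
    x.append(k)        # remember which k produced this value
    tab.append(d)
plt.plot(x, tab)       # the x-axis now really runs from 1 to 601
plt.xlabel("k")
plt.ylabel("PSNR")
plt.show()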
Currently I have a plot that looks like this.
How do I increase the size of each point by the count? In other words, if a certain point has 9 counts, how do I increase it so that it is bigger than another point with only 2 counts?
If you look closely, I think there are overlaps (one point has both grey and orange circles). How do I make it so there's a clear difference?
In case it's unclear what I mean by "plotting a 3-dimensional graph by increasing the size of the points", the image below shows what I mean, where the z-axis is the count.
This answer doesn't really answer the question straight up, but please consider this multivariate solution seaborn has:
The syntax is way easier to write than using matplotlib.
import seaborn as sns

# data, x_name, y_name, and label_name refer to your DataFrame and its columns
sns.jointplot(
    data=data,
    x=x_name,
    y=y_name,
    hue=label_name
)
And voila! You should get something that looks like this
See: https://seaborn.pydata.org/generated/seaborn.jointplot.html
Using the matplotlib library, you could iterate over your data and count it (example below).
import numpy as np
import matplotlib.pyplot as plt

# Generate data: N random integer points in [l_bound, u_bound)
N = 100
l_bound = -10
u_bound = 10
s_0 = 40  # base marker size
array = np.random.randint(l_bound, u_bound, (N, 2))

# Collapse duplicate points and count the occurrences of each
points, counts = np.unique(array, axis=0, return_counts=True)
for point_, count_ in zip(points, counts):
    # marker area grows with the squared count; c must be a sequence, not a scalar
    plt.scatter(point_[0], point_[1], c=[np.random.randint(0, 3)],
                s=s_0 * count_**2, vmin=0, vmax=2)
plt.colorbar()
plt.show()
Result
You can probably do the same with Plotly to have something fancier and closer to your second picture.
Cheers
Try this:
import pandas as pd
import matplotlib.pyplot as plt

# data is your DataFrame; these are its column names
x_name = 'x_name'
y_name = 'y_name'
z_name = 'z_name'

# value_counts() groups identical (x, y, z) rows; after reset_index() the count
# sits in a column named 0 (newer pandas versions name it 'count' instead)
scatter_data = pd.DataFrame(data[[x_name, y_name, z_name]].value_counts())
scatter_data.reset_index(inplace=True)

plt.scatter(
    scatter_data.loc[:, x_name],
    scatter_data.loc[:, y_name],
    s=scatter_data.loc[:, 0],       # marker size = count of rows at this point
    c=scatter_data.loc[:, z_name]   # colour = the z value
)
Your scatter plot looks like this because every point at, say, (1, 1) or (0, 1) overlaps every other point with the same coordinates.
With the s= argument of plt.scatter, you can specify the size of the points. If you print scatter_data, you'll see it is essentially a "group by" result with the count for each index combination.
It should look something like this, with column 0 being the count, and the resulting plot should look like the picture above.
I am trying to plot a heatmap from a 2000x2000 NumPy array. I have tried every solution from this post and many others. I have tried many cmaps and interpolation combinations.
This is the code that prepares the data:
def parse_cords(cord: float):
    # "123.456" -> row 123, column 456; increment that heatmap cell
    cord = str(cord).split(".")
    h_map[int(cord[0])][int(cord[1])] += 1
df["coordinate"] is a pandas series of floats x,y coordinate. x and y are ranging from 0 to 1999.
I have decided to modify the array so that values will range from 0 to 1, but I have tested the code also without changing the range.
h_map = np.zeros((2000, 2000), dtype='int')
cords = df["coordinate"].map(lambda cord: parse_cords(cord))  # fills h_map as a side effect
maximum = float(np.max(h_map))
percent = lambda x: x / maximum  # rescale counts to [0, 1]
h_map = percent(h_map)
h_map looks like this:
[[0.58396242 0.08840799 0.03153833 ... 0.00285187 0.00419393 0.06324442]
[0.09075658 0.11172622 0.01476262 ... 0.00134206 0.00687804 0.0082201 ]
[0.02986076 0.01862104 0.03959067 ... 0.00100654 0.00134206 0.00251636]
...
[0.00301963 0.00134206 0.00134206 ... 0.00100654 0.00150981 0.00553598]
[0.00419393 0.00268411 0.00100654 ... 0.00201309 0.00402617 0.01342057]
[0.05183694 0.00251636 0.00184533 ... 0.00301963 0.00838785 0.1016608 ]]
Now the plot:
fig, ax = plt.subplots(figsize=figsize)  # figsize is defined elsewhere in my code
ax = plt.imshow(h_map)
And result:
final plot
The result is always a heatmap with only a single color depending on the cmap used. Is my array just too big to be plotted like this or am I doing something wrong?
EDIT:
I have added plt.colorbar() and removed the scaling to the 0–1 range. The plot knows the range of the data (0 to 5500) but still draws every value as if it were 0.
I think that is because you only provide one color channel, so plt.imshow() interprets the data as a black-and-white image. You could either add more channels or use a different function, e.g. sns.heatmap():
import seaborn as sns
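A minimal sketch of that suggestion, assuming h_map is the 2000x2000 array from the question (robust=True is just one way to keep a few extreme cells from washing out the colour scale):
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

h_map = np.random.exponential(size=(2000, 2000))  # stand-in for the real array

fig, ax = plt.subplots(figsize=(10, 10))
# robust=True derives the colour limits from percentiles instead of min/max,
# so a handful of huge counts doesn't flatten everything else into one colour
sns.heatmap(h_map, ax=ax, cmap="viridis", robust=True)
plt.show()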
I am trying to segment the time-series data as shown in the figure. I have lots of data from the sensors, and any of these signals can contain a different number of isolated peak regions. In this figure, I have 3 of them. I would like to have a function that takes the time series as input and returns the segmented sections of equal length.
My initial thought was to use a sliding window that calculates the relative change in amplitude. Since windows containing peaks show relatively large changes, I could define a threshold on the relative change to pick out the windows with isolated peaks. However, choosing that threshold is a problem, because the relative change is very sensitive to the noise in the data.
Any suggestions?
To do this you need to separate the signal from the noise.
Get the mean value of your signal and add a multiplier that places borders above and below the noise (the green dashed lines).
Find peak values below the bottom of the noise -> an array with 2 groups of data.
Find peak values above the top of the noise -> an array with 2 groups of data.
Take the min index of the first bottom peak and the max index of the first top peak to get the first peak's range.
Take the min index of the second top peak and the max index of the second bottom peak to get the second peak's range.
There is some description in the code. With this method you can find the other peaks too.
One thing you have to supply by hand is the x value between the peaks used to split the data into parts.
See the graphic for a summary.
import numpy as np
from matplotlib import pyplot as plt

# create noisy data
def function(x, noise):
    y = np.sin(7 * x + 2) + noise
    return y

def function2(x, noise):
    y = np.sin(6 * x + 2) + noise
    return y

noise = np.random.uniform(low=-0.3, high=0.3, size=(100,))
x_line0 = np.linspace(1.95, 2.85, 100)
y_line0 = function(x_line0, noise)
x_pik = np.linspace(3.95, 5, 100)
y_pik = function2(x_pik, noise)

# concatenate into one series: noise, first peak, noise, second peak, noise
x = np.linspace(0, 6, 500)
y = np.concatenate((noise, y_line0, noise, y_pik, noise), axis=0)

# noise borders: mean plus/minus a hand-tuned multiple of the noise amplitude
noise_band = 1.1
top_noise = y.mean() + noise_band * np.amax(noise)
bottom_noise = y.mean() - noise_band * np.amax(noise)

fig, ax = plt.subplots()
ax.axhline(y=y.mean(), color='red', linestyle='--')
ax.axhline(y=top_noise, linestyle='--', color='green')
ax.axhline(y=bottom_noise, linestyle='--', color='green')
ax.plot(x, y)

# split an index array into 2 groups
def split(arr, cond):
    return [arr[cond], arr[~cond]]

# find indexes of data below the bottom noise border
bottom_data_indexes = np.argwhere(y < bottom_noise)
# split at the hand-picked x value between the peaks (here x = 3)
splitted_bottom_data = split(bottom_data_indexes, bottom_data_indexes < np.argmax(x > 3))

# find indexes of data above the top noise border
top_data_indexes = np.argwhere(y > top_noise)
splitted_top_data = split(top_data_indexes, top_data_indexes < np.argmax(x > 3))

# get first signal range
first_signal_start = np.amin(splitted_bottom_data[0])
first_signal_end = np.amax(splitted_top_data[0])
x_first_signal = np.take(x, [first_signal_start, first_signal_end])
ax.axvline(x=x_first_signal[0], color='orange')
ax.axvline(x=x_first_signal[1], color='orange')

# get second signal range
second_signal_start = np.amin(splitted_top_data[1])
second_signal_end = np.amax(splitted_bottom_data[1])
x_second_signal = np.take(x, [second_signal_start, second_signal_end])
ax.axvline(x=x_second_signal[0], color='orange')
ax.axvline(x=x_second_signal[1], color='orange')

plt.show()
Output:
red line: mean value of all data
green lines: top and bottom noise borders
orange lines: selected peak ranges
1. It depends on how you want to define a "region", but it sounds like you have a feeling for it rather than a strict definition. If you have a very clear definition of what kind of piece you want to cut out, you can try a method like a "matched filter".
2. You might want to detect peaks in the absolute magnitude. If that doesn't work, try peaks in the absolute magnitude of the first-order difference, or even the second-order.
3. It is hard to work directly on noisy data like this. My suggestion is to filter before you pick out the sections (then cut them from the unfiltered data). Filtering gives you smooth peaks, so peak positions can be detected from the change of sign of the derivative; see the sketch after this list. For filtering, try a plain low-pass filter first. If that doesn't work, I'd also suggest the Hilbert–Huang transform.
*. It looks like you are using MATLAB. The methods mentioned above are all available in MATLAB.
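Since the rest of this thread is in Python, here is a minimal Python sketch of the low-pass-then-detect idea (the cutoff of 0.05 and height of 0.5 are arbitrary values for this synthetic signal, not tuned recommendations):
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

# synthetic noisy signal with one isolated peak region around t = 3
t = np.linspace(0, 6, 500)
y = np.sin(7 * t) * (np.abs(t - 3) < 0.5) + np.random.uniform(-0.3, 0.3, t.size)

# low-pass filter first, so noise doesn't create spurious derivative sign changes
b, a = butter(4, 0.05)      # 4th-order Butterworth, normalized cutoff frequency
smooth = filtfilt(b, a, y)  # zero-phase filtering, so the peaks are not shifted

# detect peaks on the smoothed absolute magnitude
peaks, _ = find_peaks(np.abs(smooth), height=0.5)
print(t[peaks])             # approximate positions of the peak regions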
I'm using pylab.plot() in a for loop, and for some reason the legend has 6 entries, even though the for loop only executes 3 times.
from operator import itemgetter
import pylab

# Plot maximum confidence (Python 2 code, hence xrange)
pylab.figure()
for numPeers in sorted(peers.keys()):   # peers is my data dict
    percentUni, maxes = peers[numPeers]
    labels = list(set([i[1] for i in sorted(maxes, key=itemgetter(1))]))
    percentUni = [i[0] for i in sorted(maxes, key=itemgetter(1))]
    x = []
    y = []
    ci = []
    for l in xrange(len(labels)):
        x.append(l + 1)
        y.append(max(maxes[l*3:l*3+3]))
    pylab.plot(x, y, marker='o', label="N=%d" % numPeers)
pylab.title('Maximal confidence in sender')
pylab.xlabel('Contribute Interval')
pylab.ylabel('Percent confident')
pylab.ylim([0, 1])
pylab.xlim([0.5, 7.5])
pylab.xticks(xrange(1, 8), labels)
pylab.legend(loc='upper right')
The plot looks like this, with each legend entry having exactly 2 copies.
I know the loop only runs 3 times, because when I put in a print statement to debug, it only prints the string 3 times.
I did see this in my search, but didn't find it helpful:
Duplicate items in legend in matplotlib?
I had a similar problem. What I ended up doing was adding plt.close() at the beginning of my loop. I suspect you're seeing 6 entries because you have a nested loop in which you're changing x and y.
It ended up being a bug/typo on my part; I was supposed to write
maxes = [i[0] for i in sorted(maxes, key=itemgetter(1))]
instead of
percentUni = [i[0] for i in sorted(maxes, key=itemgetter(1))]
This mistake meant that maxes remained a list of 2-tuples instead of a list of integers, so each plot call drew two lines, which is why everything appeared twice. And because I had restricted the y-axis, I never saw that the extra data elements were plotted.
Thanks for your help, those who did answer!
Suppose I've been driving a set route with a 3G modem and GPS on my laptop, while my computer back at home records the ping delay. I've correlated ping with GPS lat/long, and now I'd like to visualise this data.
I've got about 80,000 points of data per day, and I'd like to display several months' worth. I'm especially interested in displaying areas where the ping consistently times out (i.e. ping == 1000).
Scatter plot
My first attempt was with a scatter plot, with one point per data entry. I made the size of the point 5x larger if it was a timeout, so it was obvious where these areas were. I also dropped the alpha to 0.1, for a crude way to see overlaid points.
import matplotlib.pyplot as plt
from matplotlib import cm

# longs, lats, pings are parallel lists read from my logs
# Colour by ping time
c = pings
# Size: 5x larger markers for timeouts
s = [2 if ping < 1000 else 10 for ping in pings]
# Scatter plot
plt.scatter(longs, lats, s=s, marker='o', c=c, cmap=cm.jet,
            edgecolors='none', alpha=0.1)
The obvious problem with this is that it displays one marker per data point, which is a very poor way to display large amounts of data. If I've driven past the same area twice, then the first pass's data is just displayed on top of the second pass's.
Interpolate over an even grid
I then had a try at using numpy and scipy to interpolate over an even grid.
import numpy as np
from scipy.interpolate import griddata
import matplotlib.pyplot as plt

# Convert Python lists to numpy arrays
x = np.array(longs, dtype=float)
y = np.array(lats, dtype=float)
z = np.array(pings, dtype=float)

# Make an even grid (200 rows/cols)
xi = np.linspace(min(longs), max(longs), 200)
yi = np.linspace(min(lats), max(lats), 200)

# Interpolate the scattered points onto the grid
zi = griddata((x, y), z, (xi[None, :], yi[:, None]), method='linear', fill_value=0)

# Plot the contour map
plt.contour(xi, yi, zi, 15, linewidths=0.5, colors='k')
plt.contourf(xi, yi, zi, 15, cmap=plt.cm.jet)
From this example
This looks interesting (lots of colours and shapes), but it extrapolates too far around areas I haven't explored. You can't see the routes I've travelled, just red/blue blotches.
If I've driven in a large curve, it'll interpolate for the area between (see below):
Interpolate over an uneven grid
I then had a try at using meshgrid (xi, yi = np.meshgrid(lats, longs)) instead of a fixed grid, but I'm told my array is too big.
Is there an easy way I can create a grid from my points?
My requirements:
Handle large data sets (80,000 x 60 = ~5 million points)
Display duplicate data for each point either by averaging (I assume interpolation will do this), or by taking a minimum value for each point.
Don't extrapolate too far from data points
I'm happy with a scatter plot (top), but I need some way to average the data before I display it.
(Apologies for the dodgy mspaint drawings, I can't upload actual data)
Solution:
# res_long, res_lat and the bounds a, b, c, d are grid parameters chosen
# elsewhere; ax is an existing Axes
# Get sum of pings per cell
hsum, long_range, lat_range = np.histogram2d(longs, lats, bins=(res_long, res_lat),
                                             range=((a, b), (c, d)), weights=pings)
# Get count of samples per cell
hcount, _, _ = np.histogram2d(longs, lats, bins=(res_long, res_lat),
                              range=((a, b), (c, d)))
# Get average ping, only where there is data (avoids division by zero)
x, y = np.where(hcount)
average = hsum[x, y] / hcount[x, y]
# Make scatter plot (long_range/lat_range are the bin edges, so indexing them
# with x, y uses each cell's lower-left corner)
scatterplot = ax.scatter(long_range[x], lat_range[y], s=3, c=average,
                         linewidths=0, cmap="jet", vmin=0, vmax=1000)
To simplify your question: you have two sets of points, one for ping < 1000 and one for ping >= 1000.
Since the number of points is very large, you can't plot them directly with scatter(). I created some sample data with:
longs = (np.random.rand(60, 1) + np.linspace(-np.pi, np.pi, 80000)).reshape(-1)
lats = np.sin(longs) + np.random.rand(len(longs)) * 0.1
bad_index = (longs>0) & (longs<1)
bad_longs = longs[bad_index]
bad_lats = lats[bad_index]
(longs, lats) are the points with ping < 1000; (bad_longs, bad_lats) are the points with ping >= 1000.
You can use numpy.histogram2d() to count the points:
ranges = [[np.min(lats), np.max(lats)], [np.min(longs), np.max(longs)]]
h, lat_range, long_range = np.histogram2d(lats, longs, bins=(300,300), range=ranges)
bad_h, lat_range2, long_range2 = np.histogram2d(bad_lats, bad_longs, bins=(300,300), range=ranges)
h and bad_h are the point counts in each little square area.
Then you can choose many methods to visualize it. For example, you can plot it by scatter():
y, x = np.where(h)
count = h[y, x]
pl.scatter(long_range[x], lat_range[y], s=count/20, c=count, linewidths=0, cmap="Blues")
count = bad_h[y, x]
pl.scatter(long_range2[x], lat_range2[y], s=count/20, c=count, linewidths=0, cmap="Reds")
pl.show()
Here is the full code:
import numpy as np
import pylab as pl
longs = (np.random.rand(60, 1) + np.linspace(-np.pi, np.pi, 80000)).reshape(-1)
lats = np.sin(longs) + np.random.rand(len(longs)) * 0.1
bad_index = (longs>0) & (longs<1)
bad_longs = longs[bad_index]
bad_lats = lats[bad_index]
ranges = [[np.min(lats), np.max(lats)], [np.min(longs), np.max(longs)]]
h, lat_range, long_range = np.histogram2d(lats, longs, bins=(300,300), range=ranges)
bad_h, lat_range2, long_range2 = np.histogram2d(bad_lats, bad_longs, bins=(300,300), range=ranges)
y, x = np.where(h)
count = h[y, x]
pl.scatter(long_range[x], lat_range[y], s=count/20, c=count, linewidths=0, cmap="Blues")
count = bad_h[y, x]
pl.scatter(long_range2[x], lat_range2[y], s=count/20, c=count, linewidths=0, cmap="Reds")
pl.show()
The output figure is:
The GDAL libraries, including the Python API and associated utilities, particularly gdal_grid, should work for you. They include a number of interpolation and averaging methods, with options for generating gridded data from scattered points, and you can adjust the grid cell size to get a pleasing resolution.
GDAL handles a number of data formats, but you should be able to pass your coordinates and ping values as CSV and get back a PNG or JPEG without much trouble.
Keep in mind that lat/lon data is not a planar coordinate system. If you intend to combine your results with other map data, you'll have to figure out what map projection, units, etc. to use.
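As a rough illustration of the idea only: a sketch driving gdal_grid through GDAL's Python bindings. The file names and column names (points.csv with lon, lat, ping columns) are made up for the example, and the keyword arguments accepted by gdal.Grid() vary somewhat across GDAL versions, so check them against your installation.
from osgeo import gdal

# OGR VRT wrapper that tells GDAL how to read point geometry out of the CSV
vrt = """<OGRVRTDataSource>
  <OGRVRTLayer name="points">
    <SrcDataSource>points.csv</SrcDataSource>
    <GeometryType>wkbPoint</GeometryType>
    <GeometryField encoding="PointFromColumns" x="lon" y="lat"/>
  </OGRVRTLayer>
</OGRVRTDataSource>"""
with open("points.vrt", "w") as f:
    f.write(vrt)

# Average all pings within a small search radius of each cell; cells with
# no data keep the nodata value instead of being extrapolated
gdal.Grid(
    "pings.tif",
    "points.vrt",
    algorithm="average:radius1=0.001:radius2=0.001:nodata=-1",
    zfield="ping",
    width=800,
    height=800,
)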