This code is inspired by Bokeh's datashader. I have a time series of millions of data points which I would like to visualize in a browser, but: 1. It is slow to transmit and plot so many points in JavaScript. 2. Even if the browser is powerful enough to draw all those points, many of them will inevitably lie on top of each other, since a computer screen has at most a few thousand pixels in each direction. The idea of datashader is to aggregate the data so that at most one point is plotted per pixel. This way I make full use of the screen without visually losing any information. However, datashader is overkill for my application and not flexible enough for my situation, so I wrote the following code to do some simple downsampling of time series.
```python
# To deal with time series, first convert the pandas timestamps to int64:
# df['time'] = df.time.values.astype(np.int64) / 1e6
import pandas as pd
import numpy as np

def sampling1d(dataframe, x, y, width, xmin=None, xmax=None):
    df = dataframe[[x, y]]
    # optionally restrict to the visible x range
    if xmin is not None:
        df = df[df[x] >= xmin]
    if xmax is not None:
        df = df[df[x] <= xmax]
    # one bin per horizontal pixel
    bin_edges = np.linspace(df[x].min(), df[x].max(), width + 1)
    bins = np.searchsorted(bin_edges, df[x])
    bins[bins == 0] = 1  # fold the left edge into the first bin
    agg = df.groupby(bins)
    df2 = pd.DataFrame()
    df2[x] = agg[x].max()
    df2[y + '_mean'] = agg[y].mean()
    df2[y + '_min'] = agg[y].min()
    df2[y + '_max'] = agg[y].max()
    return df2
```
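As a quick sanity check, here is how `sampling1d` might be used on synthetic data (the noisy sine-wave series and the column names `time`/`value` are made up for illustration; the function is repeated so the snippet runs on its own):

```python
import numpy as np
import pandas as pd

def sampling1d(dataframe, x, y, width, xmin=None, xmax=None):
    df = dataframe[[x, y]]
    if xmin is not None:
        df = df[df[x] >= xmin]
    if xmax is not None:
        df = df[df[x] <= xmax]
    bin_edges = np.linspace(df[x].min(), df[x].max(), width + 1)
    bins = np.searchsorted(bin_edges, df[x])
    bins[bins == 0] = 1
    agg = df.groupby(bins)
    df2 = pd.DataFrame()
    df2[x] = agg[x].max()
    df2[y + '_mean'] = agg[y].mean()
    df2[y + '_min'] = agg[y].min()
    df2[y + '_max'] = agg[y].max()
    return df2

# one million noisy samples of a sine wave
rng = np.random.default_rng(0)
n = 1_000_000
t = np.arange(n, dtype=np.float64)
df = pd.DataFrame({'time': t,
                   'value': np.sin(t / 5000) + rng.standard_normal(n) * 0.1})

# downsample to an 800-pixel-wide plot
small = sampling1d(df, 'time', 'value', width=800)
print(len(small))  # at most 800 rows, one per pixel column
```

Plotting `value_min` and `value_max` as a band around `value_mean` preserves the visual envelope of the raw series, which a mean alone would smooth away.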
Here is a version for downsampling big data in 2D:
```python
def downsample2d(x, y, logx=False, logy=False, width=500, height=500, weights=None):
    # Build bin edges: logarithmic if requested, otherwise let
    # histogram2d create `width`/`height` equal-width bins.
    # (width + 1 edges give exactly `width` bins, matching the linear branch.)
    if logx:
        binx = np.logspace(np.log10(np.min(x)), np.log10(np.max(x)), width + 1)
    else:
        binx = width
    if logy:
        biny = np.logspace(np.log10(np.min(y)), np.log10(np.max(y)), height + 1)
    else:
        biny = height
    z, binx2, biny2 = np.histogram2d(x, y, bins=[binx, biny])
    xi, yi = z.nonzero()                  # keep only non-empty pixels
    binx2 = (binx2[:-1] + binx2[1:]) / 2  # edges -> bin centers
    biny2 = (biny2[:-1] + biny2[1:]) / 2
    if weights is not None:
        # mean weight per pixel: weighted counts divided by raw counts
        z2, _, _ = np.histogram2d(x, y, bins=[binx, biny], weights=weights)
        return binx2[xi], biny2[yi], z2[xi, yi] / z[xi, yi]
    return binx2[xi], biny2[yi]
```
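A usage sketch for the 2D version, on made-up correlated Gaussian data (the function is repeated so the snippet is self-contained). With `weights` it returns the mean weight per occupied pixel, which is handy for coloring a scatter plot by a third variable:

```python
import numpy as np

def downsample2d(x, y, logx=False, logy=False, width=500, height=500, weights=None):
    if logx:
        binx = np.logspace(np.log10(np.min(x)), np.log10(np.max(x)), width + 1)
    else:
        binx = width
    if logy:
        biny = np.logspace(np.log10(np.min(y)), np.log10(np.max(y)), height + 1)
    else:
        biny = height
    z, binx2, biny2 = np.histogram2d(x, y, bins=[binx, biny])
    xi, yi = z.nonzero()
    binx2 = (binx2[:-1] + binx2[1:]) / 2
    biny2 = (biny2[:-1] + biny2[1:]) / 2
    if weights is not None:
        z2, _, _ = np.histogram2d(x, y, bins=[binx, biny], weights=weights)
        return binx2[xi], biny2[yi], z2[xi, yi] / z[xi, yi]
    return binx2[xi], biny2[yi]

# two hundred thousand correlated points
rng = np.random.default_rng(1)
n = 200_000
x = rng.standard_normal(n)
y = x + 0.5 * rng.standard_normal(n)

# occupied pixel centers only -- at most width*height points to plot
px, py = downsample2d(x, y, width=500, height=500)

# color by the mean of a third variable per pixel
px2, py2, pz = downsample2d(x, y, width=500, height=500, weights=np.abs(x))
```

Empty pixels are dropped entirely (`z.nonzero()`), so the browser receives only the points that would actually be drawn.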