Downsampling large time series for visualization

This code is inspired by Bokeh's datashader. I have a time series of millions of data points which I would like to visualize in a browser. But: 1. It is slow to transmit and plot so many points in JavaScript. 2. Even if the browser is powerful enough to draw all those points, many of them will inevitably lie on top of each other, since a computer screen has at most a few thousand pixels in each direction. The idea of datashader is to aggregate the data so that at most one point is plotted per pixel. This way I make full use of the screen without visually losing any information. However, datashader is overkill for my application and not flexible enough for my situation, so I wrote the following code to do some simple downsampling of time series.

# To deal with time series, first convert the pandas timestamps to int64
# (the int64 view is nanoseconds since the epoch; dividing by 1e6 gives milliseconds):
# df['time'] = df.time.values.astype(np.int64) / 1e6
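As a quick sketch of that conversion (using a hypothetical DataFrame with a `datetime64[ns]` column named `time`):

```python
import numpy as np
import pandas as pd

# Hypothetical example frame with one sample per second
df = pd.DataFrame({"time": pd.date_range("2017-01-01", periods=3, freq="s"),
                   "value": [1.0, 2.0, 3.0]})

# datetime64[ns] -> int64 nanoseconds -> float milliseconds
df["time"] = df.time.values.astype(np.int64) / 1e6

# Consecutive samples are now 1000.0 ms apart
print(df["time"].iloc[1] - df["time"].iloc[0])
```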

import pandas as pd
import numpy as np
def sampling1d(df, x, y, width, xmin=None, xmax=None):
    # Optionally restrict to the visible x range
    if xmin is not None:
        df = df[df[x] >= xmin]
    if xmax is not None:
        df = df[df[x] <= xmax]
    # One bin per horizontal pixel, then one aggregated (mean) point per bin
    bin_edges = np.linspace(df[x].min(), df[x].max(), width + 1)
    bins = np.searchsorted(bin_edges, df[x])
    df2 = df.groupby(bins)[[x, y]].mean()
    return df2
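To illustrate the binning idea end to end, here is a self-contained sketch on a synthetic one-million-point series, assuming a hypothetical plot 800 pixels wide (the column names `t`, `v` and the data are made up for the example):

```python
import numpy as np
import pandas as pd

# Synthetic series: one million noisy samples (hypothetical data)
n, width = 1_000_000, 800
rng = np.random.default_rng(0)
df = pd.DataFrame({"t": np.arange(n, dtype=float),
                   "v": np.sin(np.arange(n) / 5e4) + rng.normal(0, 0.1, n)})

# Same idea as sampling1d: one bin per pixel, one mean point per bin
edges = np.linspace(df.t.min(), df.t.max(), width + 1)
small = df.groupby(np.searchsorted(edges, df.t))[["t", "v"]].mean()

# A few hundred points to ship to the browser instead of a million
print(len(small))
```

The aggregated frame has at most `width + 1` rows, so the browser only ever draws about one point per pixel regardless of how large the raw series is.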
