Tracks Module

The tracks module provides functionality for tracking objects across frames and managing track data.

Core Classes

track.Track

Represents a single object trajectory across multiple frames.

tracker.Tracker

Tracker(name: str, tracks: List[vista.tracks.track.Track])

The Track class represents a temporal sequence of detections, while Tracker is a container that manages a named collection of Track objects for loading, saving, and export.

Tracking Algorithms

VISTA includes several tracking algorithms:

  • Simple Tracker: Nearest-neighbor tracking

  • Kalman Tracker: Kalman filter-based tracking with motion prediction

  • Network Flow Tracker: Global optimization using network flow

  • Tracklet Tracker: Two-stage tracklet-based tracking

See Object Tracking for detailed information on each algorithm.
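The simplest of these, nearest-neighbor tracking, associates each detection in a frame with the closest active track from the previous frame, subject to a distance gate. The sketch below illustrates the idea in plain NumPy; it is a conceptual example only (the helper name and gating strategy are illustrative) and does not reproduce VISTA's SimpleTracker implementation.

import numpy as np

def greedy_nearest_neighbor(prev_points, new_points, max_distance=5.0):
    # Associate new detections with previous track positions by greedy
    # nearest-neighbor matching. Returns (prev_index, new_index) pairs;
    # unmatched detections would start new tracks in a full tracker.
    prev_points = np.asarray(prev_points, dtype=float)
    new_points = np.asarray(new_points, dtype=float)
    # Pairwise Euclidean distances between previous and new points
    dists = np.linalg.norm(prev_points[:, None, :] - new_points[None, :, :], axis=2)
    matches = []
    used_new = set()
    for i in np.argsort(dists.min(axis=1)):  # handle the closest tracks first
        for j in np.argsort(dists[i]):
            if j not in used_new and dists[i, j] <= max_distance:
                matches.append((int(i), int(j)))
                used_new.add(int(j))
                break
    return matches

# Example: two objects each moving roughly one pixel between frames
print(greedy_nearest_neighbor([(100.0, 150.0), (200.0, 50.0)],
                              [(101.0, 150.5), (201.0, 50.5)]))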

Basic Usage

Creating and Using Tracks

import numpy as np

from vista.tracks import Track

# Create a track from frame numbers and pixel coordinates
# (sensor is a Sensor instance created elsewhere; it provides coordinate conversions)
track = Track(
    name="track_1",
    frames=np.array([0, 1]),
    rows=np.array([150.0, 151.0]),
    columns=np.array([100.0, 102.0]),
    sensor=sensor,
)

# Access track properties
print(f"Track points: {len(track)}")
print(f"Track length: {track.length:.1f} pixels")

Slicing and Subsetting

# Get track segment
track_segment = track[10:20]

# Copy a track
track_copy = track.copy()
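Individual positions and timestamps can also be queried directly (see get_track_data_at_frame() and get_times() in the module reference below); the frame number here is illustrative:

# Look up the position at a specific frame (None if the frame is not in the track)
position = track.get_track_data_at_frame(1)
if position is not None:
    row, column = position
    print(f"Frame 1: row={row}, column={column}")

# Timestamps for each track point, taken from the sensor's imagery times (None if unavailable)
times = track.get_times()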

Running Tracking

from vista.algorithms.trackers.simple_tracker import SimpleTracker

# Create tracker
tracker = SimpleTracker(max_distance=5.0)

# Track detections
tracks = tracker.track(detections)

# Process results
for track in tracks:
    print(f"Track {track.id}: {len(track)} detections")

Track Analysis

VISTA provides tools for track analysis and refinement:

from vista.algorithms.tracks.interpolation import interpolate_tracks
from vista.algorithms.tracks.savitzky_golay import smooth_tracks

# Interpolate missing detections
interpolated_tracks = interpolate_tracks(tracks)

# Smooth track trajectories
smoothed_tracks = smooth_tracks(tracks, window_size=5, poly_order=2)

Module Reference

Track Class

Track data model for representing temporal object trajectories.

This module defines the Track class, which represents a single object trajectory across multiple frames with support for multiple coordinate systems (pixel, geodetic, time-based), visualization styling, and data persistence.

class vista.tracks.track.Track(name, frames, rows, columns, sensor, color='g', marker='o', line_width=2, marker_size=12, visible=True, tail_length=0, complete=False, show_line=True, line_style='SolidLine', labels=<factory>, extraction_metadata=None, covariance_00=None, covariance_01=None, covariance_11=None, show_uncertainty=False)[source]

Bases: object

Represents a single object trajectory across multiple frames.

A Track contains temporal position data for a moving object, with support for multiple coordinate systems (pixel, geodetic, time-based) and rich visualization options. Tracks can be created manually, loaded from files, or generated by tracking algorithms.

Parameters:
  • name (str) – Unique identifier for this track

  • frames (NDArray[np.int_]) – Frame numbers where track positions are defined

  • rows (NDArray[np.float64]) – Row (vertical) pixel coordinates for each frame

  • columns (NDArray[np.float64]) – Column (horizontal) pixel coordinates for each frame

  • sensor (Sensor) – Sensor object providing coordinate conversion capabilities

Attributes:
  • color (str, optional) – Color for track visualization, by default 'g' (green)

  • marker (str, optional) – Marker style for current position ('o', 's', 't', 'd', '+', 'x', 'star'), by default 'o' (circle)

  • line_width (int, optional) – Width of line connecting track points, by default 2

  • marker_size (int, optional) – Size of current position marker, by default 12

  • visible (bool, optional) – Whether track is visible in viewer, by default True

  • tail_length (int, optional) – Number of previous frames to show (0 = all history), by default 0

  • complete (bool, optional) – If True, show entire track regardless of current frame, by default False

  • show_line (bool, optional) – Whether to draw line connecting track points, by default True

  • line_style (str, optional) – Qt line style ('SolidLine', 'DashLine', 'DotLine', 'DashDotLine', 'DashDotDotLine'), by default 'SolidLine'

  • labels (set[str], optional) – Set of text labels for categorizing/filtering tracks, by default empty set

  • extraction_metadata (dict, optional) – Extraction metadata containing image chips and signal detection results. Dictionary with keys: 'chip_size' (int), 'chips' (NDArray with shape (n_points, diameter, diameter)), 'signal_masks' (boolean NDArray with same shape), 'noise_stds' (NDArray with shape (n_points,)), by default None
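The styling fields are plain attributes and can be adjusted after construction, for example:

# Restyle an existing track for display
track.color = 'r'
track.line_style = 'DashLine'
track.tail_length = 30  # only draw the last 30 frames of history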

__getitem__(slice)[source]

Slice track by frame range

get_times()[source]

Get timestamps for each track point using sensor imagery times

from_dataframe(df, sensor, name)[source]

Create Track from pandas DataFrame with coordinate conversion

length

Property returning the cumulative Euclidean distance along the track

copy()[source]

Create a deep copy of the track

to_csv(file)

Save track to CSV file

to_dataframe()[source]

Convert track to pandas DataFrame

Notes

  • Track coordinates can be provided as pixel (row/col) or geodetic (lat/lon/alt)

  • Times can be used instead of frames with automatic conversion via sensor

  • The from_dataframe() method handles coordinate system conversions automatically

  • Track length is computed lazily and cached for performance

name: str
frames: ndarray[tuple[Any, ...], dtype[int64]]
rows: ndarray[tuple[Any, ...], dtype[float64]]
columns: ndarray[tuple[Any, ...], dtype[float64]]
sensor: Sensor
color: str = 'g'
marker: str = 'o'
line_width: int = 2
marker_size: int = 12
visible: bool = True
tail_length: int = 0
complete: bool = False
show_line: bool = True
line_style: str = 'SolidLine'
labels: set[str]
extraction_metadata: dict | None = None
covariance_00: ndarray[tuple[Any, ...], dtype[float64]] | None = None
covariance_01: ndarray[tuple[Any, ...], dtype[float64]] | None = None
covariance_11: ndarray[tuple[Any, ...], dtype[float64]] | None = None
show_uncertainty: bool = False
uuid: str = None
__getitem__(s)[source]
__len__()[source]
get_track_data_at_frame(frame_num)[source]

Get track position at a specific frame using O(1) cached lookup.

Parameters:

frame_num (int) – Frame number to query

Returns:

(row, column) coordinates at this frame, or None if frame not in track

Return type:

tuple or None

get_visible_indices(current_frame)[source]

Get indices of track points that should be visible at the current frame.

Parameters:

current_frame (int) – Current frame number

Returns:

Array of indices for visible track points, or None if no points visible

Return type:

NDArray or None

invalidate_caches()[source]

Invalidate cached data structures when track data changes.

get_pen(width=None, style=None)[source]

Get cached PyQtGraph pen object, creating only if parameters changed.

Parameters:
  • width (int, optional) – Line width override, uses self.line_width if None

  • style (str, optional) – Line style override, uses self.line_style if None

Returns:

PyQtGraph pen object

Return type:

pg.mkPen

get_brush()[source]

Get cached PyQtGraph brush object for marker fill, creating only if parameters changed.

Returns:

PyQtGraph brush object

Return type:

pg.mkBrush

has_uncertainty()[source]

Check if track has uncertainty data.

Returns:

True if track has all three covariance matrix elements (C00, C01, C11), False otherwise

Return type:

bool

get_uncertainty_ellipse_parameters()[source]

Convert covariance matrix to ellipse parameters for visualization.

Computes the semi-major axis length, semi-minor axis length, and rotation angle from the 2D covariance matrix at each track point.

Returns:

Tuple of (semi_major_axis, semi_minor_axis, rotation_degrees) arrays, or None if no uncertainty data. Rotation is in degrees, counter-clockwise from horizontal (positive column axis).

Return type:

Optional[tuple[NDArray[np.float64], NDArray[np.float64], NDArray[np.float64]]]

get_uncertainty_radius()[source]

Compute the geometric mean radius of uncertainty ellipses.

The geometric mean radius is computed as the fourth root of the covariance matrix determinant: sqrt(sqrt(det(Cov))) = sqrt(sqrt(C00*C11 - C01^2)). This represents the radius of a circle with the same area as the uncertainty ellipse.

Returns:

Array of geometric mean radii for each track point, or None if uncertainty data is not available

Return type:

Optional[NDArray[np.float64]]
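For reference, the geometric mean radius described above can be reproduced directly from the covariance elements; a minimal NumPy sketch of the same formula (the helper name is illustrative):

import numpy as np

# Geometric mean radius: fourth root of the covariance determinant
def geometric_mean_radius(c00, c01, c11):
    det = c00 * c11 - c01 ** 2
    return np.sqrt(np.sqrt(det))

# Example: diagonal covariance with variances 4 and 1 (pixels^2) -> sqrt(2) pixels
print(geometric_mean_radius(4.0, 0.0, 1.0))  # 1.4142...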

get_times()[source]

Get timestamps for each track point using sensor imagery times.

Matches track frames to sensor imagery frames and returns corresponding timestamps. Returns NaT (Not-a-Time) for frames without matching imagery.

Returns:

Array of timestamps with same length as track, or None if sensor has no imagery times

Return type:

NDArray[np.datetime64] or None

Notes

Uses binary search (searchsorted) for efficient frame matching.

classmethod from_dataframe(df, sensor, name=None)[source]

Create Track from DataFrame with automatic coordinate conversion.

Supports multiple input coordinate systems with automatic conversion:
  • Frames or Times → Frames (Times require sensor imagery)

  • Rows/Columns or Lat/Lon/Alt → Rows/Columns (geodetic coordinates require a sensor)

Priority system: Frames > Times, Rows/Columns > Lat/Lon/Alt

Parameters:
  • df (pd.DataFrame) – DataFrame containing track data with required columns based on coordinate system (see Notes)

  • sensor (Sensor) – Sensor object for coordinate conversions

  • name (str, optional) – Track name, by default taken from df["Track"]

Returns:

New Track object with converted coordinates

Return type:

Track

Raises:

ValueError – If required columns are missing or coordinate conversion fails

Notes

Required columns (one set from each group):

Temporal coordinates (choose one):
  • “Frames” : Frame numbers (preferred)

  • “Times” : Timestamps (requires sensor with imagery times)

Spatial coordinates (choose one):
  • “Rows” and “Columns” : Pixel coordinates (preferred)

  • “Latitude (deg)”, “Longitude (deg)”, “Altitude (km)” : Geodetic coordinates (requires sensor with geolocation capability)

Optional styling columns:
  • “Color”, “Marker”, “Line Width”, “Marker Size”, “Visible”, “Complete”, “Show Line”, “Line Style”, “Tail Length”, “Labels”
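A minimal sketch of building a track from frame and pixel-coordinate columns (the values and track name are illustrative; sensor is a Sensor instance as in the examples above):

import pandas as pd

from vista.tracks import Track

df = pd.DataFrame({
    "Frames": [0, 1, 2],
    "Rows": [150.0, 151.0, 152.5],
    "Columns": [100.0, 102.0, 104.1],
})

# sensor supplies coordinate conversions; required when using Times or geodetic input
track = Track.from_dataframe(df, sensor, name="example_track")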

property length

Cumulative Euclidean distance along the track path.

Computes the sum of pixel distances between consecutive track points. Result is cached for performance.

Returns:

Total track length in pixels, or 0.0 if track has fewer than 2 points

Return type:

float

copy()[source]

Create a deep copy of this track object.

Returns:

New Track object with copied arrays and styling attributes

Return type:

Track

to_dataframe()[source]

Convert track to DataFrame

Raises:

ValueError – If geolocation/time output is requested but the imagery is missing the required data
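The conversion also works in the other direction for export; passing a file path to to_csv is assumed here, and the path is illustrative:

# Round-trip the track data through pandas, or write it straight to disk
df = track.to_dataframe()
track.to_csv("example_track.csv")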

__init__(name, frames, rows, columns, sensor, color='g', marker='o', line_width=2, marker_size=12, visible=True, tail_length=0, complete=False, show_line=True, line_style='SolidLine', labels=<factory>, extraction_metadata=None, covariance_00=None, covariance_01=None, covariance_11=None, show_uncertainty=False)

Tracker Class

class vista.tracks.tracker.Tracker(name: str, tracks: List[vista.tracks.track.Track])[source]

Bases: object

name: str
tracks: List[Track]
uuid: str = None
__post_init__()[source]

Initialize UUID if not already set

__eq__(other)[source]

Compare Trackers based on UUID

classmethod from_dataframe(name, df, imagery=None)[source]
to_csv(file)[source]
to_dataframe()[source]

Convert all tracks to a DataFrame

Returns:

DataFrame with all tracks’ data

__init__(name, tracks)
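A brief sketch of rebuilding a Tracker from previously exported data via the from_dataframe constructor; the file name is illustrative, the DataFrame is assumed to follow the same column conventions as Track.from_dataframe, and imagery may be required when the data uses times or geodetic coordinates:

import pandas as pd

from vista.tracks.tracker import Tracker

# Rebuild a Tracker (and its Track objects) from an exported CSV
df = pd.read_csv("exported_tracks.csv")
tracker = Tracker.from_dataframe("reloaded_tracks", df, imagery=None)

print(f"{tracker.name}: {len(tracker.tracks)} tracks")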