vista.tracks.track.Track

class vista.tracks.track.Track(name, frames, rows, columns, sensor, color='g', marker='o', line_width=2, marker_size=12, visible=True, tail_length=0, complete=False, show_line=True, line_style='SolidLine', labels=<factory>, extraction_metadata=None, covariance_00=None, covariance_01=None, covariance_11=None, show_uncertainty=False)[source]

Represents a single object trajectory across multiple frames.

A Track contains temporal position data for a moving object, with support for multiple coordinate systems (pixel, geodetic, time-based) and rich visualization options. Tracks can be created manually, loaded from files, or generated by tracking algorithms.

Parameters:
  • name (str) – Unique identifier for this track

  • frames (NDArray[np.int_]) – Frame numbers where track positions are defined

  • rows (NDArray[np.float64]) – Row (vertical) pixel coordinates for each frame

  • columns (NDArray[np.float64]) – Column (horizontal) pixel coordinates for each frame

  • sensor (Sensor) – Sensor object providing coordinate conversion capabilities

  • color (str, optional) – Color for track visualization, by default ‘g’ (green)

  • marker (str, optional) – Marker style for current position (‘o’, ‘s’, ‘t’, ‘d’, ‘+’, ‘x’, ‘star’), by default ‘o’ (circle)

  • line_width (int, optional) – Width of the line connecting track points, by default 2

  • marker_size (int, optional) – Size of the current-position marker, by default 12

  • visible (bool, optional) – Whether the track is visible in the viewer, by default True

  • tail_length (int, optional) – Number of previous frames to show (0 = all history), by default 0

  • complete (bool, optional) – If True, show the entire track regardless of the current frame, by default False

  • show_line (bool, optional) – Whether to draw the line connecting track points, by default True

  • line_style (str, optional) – Qt line style (‘SolidLine’, ‘DashLine’, ‘DotLine’, ‘DashDotLine’, ‘DashDotDotLine’), by default ‘SolidLine’

  • labels (set[str], optional) – Set of text labels for categorizing/filtering tracks, by default an empty set

  • extraction_metadata (dict, optional) – Extraction metadata containing image chips and signal detection results. Dictionary with keys: ‘chip_size’ (int), ‘chips’ (NDArray with shape (n_points, diameter, diameter)), ‘signal_masks’ (boolean NDArray with the same shape), and ‘noise_stds’ (NDArray with shape (n_points,)), by default None

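Example: a minimal construction sketch. This assumes an existing Sensor instance (called sensor here), which is not built in this snippet; all other arguments follow the signature above.

```python
import numpy as np

from vista.tracks.track import Track

# `sensor` is assumed to be an existing vista Sensor instance.
track = Track(
    name="target_01",
    frames=np.array([0, 1, 2, 3], dtype=np.int_),
    rows=np.array([120.5, 121.0, 121.8, 122.4]),
    columns=np.array([64.0, 65.2, 66.9, 68.1]),
    sensor=sensor,
    color="r",
    marker="x",
    tail_length=10,       # show only the most recent 10 frames of history
    labels={"vehicle"},
)
```
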
__getitem__(slice)[source]

Slice track by frame range

get_times()[source]

Get timestamps for each track point using sensor imagery times

from_dataframe(df, sensor, name)[source]

Create Track from pandas DataFrame with coordinate conversion

length

Property returning the cumulative Euclidean distance along the track

copy()[source]

Create a deep copy of the track

to_csv(file)

Save track to CSV file

to_dataframe()[source]

Convert track to pandas DataFrame

Notes

  • Track coordinates can be provided as pixel (row/col) or geodetic (lat/lon/alt)

  • Times can be used instead of frames with automatic conversion via sensor

  • The from_dataframe() method handles coordinate system conversions automatically

  • Track length is computed lazily and cached for performance

__init__(name, frames, rows, columns, sensor, color='g', marker='o', line_width=2, marker_size=12, visible=True, tail_length=0, complete=False, show_line=True, line_style='SolidLine', labels=<factory>, extraction_metadata=None, covariance_00=None, covariance_01=None, covariance_11=None, show_uncertainty=False)

Methods

__init__(name, frames, rows, columns, sensor)

copy()

Create a deep copy of this track object.

from_dataframe(df, sensor[, name])

Create Track from DataFrame with automatic coordinate conversion.

get_brush()

Get cached PyQtGraph brush object for marker fill, creating only if parameters changed.

get_pen([width, style])

Get cached PyQtGraph pen object, creating only if parameters changed.

get_times()

Get timestamps for each track point using sensor imagery times.

get_track_data_at_frame(frame_num)

Get track position at a specific frame using O(1) cached lookup.

get_uncertainty_ellipse_parameters()

Convert covariance matrix to ellipse parameters for visualization.

get_uncertainty_radius()

Compute the geometric mean radius of uncertainty ellipses.

get_visible_indices(current_frame)

Get indices of track points that should be visible at the current frame.

has_uncertainty()

Check if track has uncertainty data.

invalidate_caches()

Invalidate cached data structures when track data changes.

to_dataframe()

Convert track to pandas DataFrame.

Attributes

name: str
frames: ndarray[tuple[Any, ...], dtype[int64]]
rows: ndarray[tuple[Any, ...], dtype[float64]]
columns: ndarray[tuple[Any, ...], dtype[float64]]
sensor: Sensor
color: str = 'g'
marker: str = 'o'
line_width: int = 2
marker_size: int = 12
visible: bool = True
tail_length: int = 0
complete: bool = False
show_line: bool = True
line_style: str = 'SolidLine'
labels: set[str]
extraction_metadata: dict | None = None
covariance_00: ndarray[tuple[Any, ...], dtype[float64]] | None = None
covariance_01: ndarray[tuple[Any, ...], dtype[float64]] | None = None
covariance_11: ndarray[tuple[Any, ...], dtype[float64]] | None = None
show_uncertainty: bool = False
uuid: str = None
get_track_data_at_frame(frame_num)[source]

Get track position at a specific frame using O(1) cached lookup.

Parameters:

frame_num (int) – Frame number to query

Returns:

(row, column) coordinates at this frame, or None if frame not in track

Return type:

tuple or None
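
The “O(1) cached lookup” amounts to a one-time frame-to-index mapping that is reused for every query; a rough sketch of that idea (not the library’s actual implementation):

```python
import numpy as np

frames = np.array([3, 4, 5, 9])
rows = np.array([10.0, 10.5, 11.2, 13.0])
columns = np.array([20.0, 20.4, 21.1, 24.0])

# Build the frame -> index lookup once; each query is then a single dict hit.
frame_to_index = {int(f): i for i, f in enumerate(frames)}

def position_at_frame(frame_num):
    i = frame_to_index.get(frame_num)
    return None if i is None else (float(rows[i]), float(columns[i]))

print(position_at_frame(5))   # (11.2, 21.1)
print(position_at_frame(7))   # None
```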

get_visible_indices(current_frame)[source]

Get indices of track points that should be visible at the current frame.

Parameters:

current_frame (int) – Current frame number

Returns:

Array of indices for visible track points, or None if no points visible

Return type:

NDArray or None
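
A rough sketch of the visibility rule implied by tail_length and complete. The exact semantics here are an assumption (history up to the current frame, optionally limited to the last tail_length frames), not the library’s code:

```python
import numpy as np

def visible_indices(frames, current_frame, tail_length=0, complete=False):
    if complete:
        return np.arange(len(frames))                    # whole track, regardless of frame
    mask = frames <= current_frame                       # only history up to the current frame
    if tail_length > 0:
        mask &= frames > current_frame - tail_length     # keep just the recent tail
    idx = np.nonzero(mask)[0]
    return idx if idx.size else None

frames = np.array([0, 1, 2, 3, 4, 5])
print(visible_indices(frames, current_frame=4, tail_length=2))   # [3 4]
```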

invalidate_caches()[source]

Invalidate cached data structures when track data changes.

get_pen(width=None, style=None)[source]

Get cached PyQtGraph pen object, creating only if parameters changed.

Parameters:
  • width (int, optional) – Line width override, uses self.line_width if None

  • style (str, optional) – Line style override, uses self.line_style if None

Returns:

PyQtGraph pen object

Return type:

pg.mkPen

get_brush()[source]

Get cached PyQtGraph brush object for marker fill, creating only if parameters changed.

Returns:

PyQtGraph brush object

Return type:

pg.mkBrush

has_uncertainty()[source]

Check if track has uncertainty data.

Returns:

True if track has all three covariance matrix elements (C00, C01, C11), False otherwise

Return type:

bool

get_uncertainty_ellipse_parameters()[source]

Convert covariance matrix to ellipse parameters for visualization.

Computes the semi-major axis length, semi-minor axis length, and rotation angle from the 2D covariance matrix at each track point.

Returns:

Tuple of (semi_major_axis, semi_minor_axis, rotation_degrees) arrays, or None if no uncertainty data. Rotation is in degrees, counter-clockwise from horizontal (positive column axis).

Return type:

Optional[tuple[NDArray[np.float64], NDArray[np.float64], NDArray[np.float64]]]
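
The conversion is the standard eigendecomposition of the 2x2 covariance [[C00, C01], [C01, C11]]; a sketch of that math (axis lengths shown at 1 sigma; the library may apply a different scale factor, and which image axis C00 corresponds to is not assumed here):

```python
import numpy as np

def ellipse_parameters(c00, c01, c11):
    """1-sigma ellipse axes and orientation from the covariance [[c00, c01], [c01, c11]]."""
    cov = np.array([[c00, c01], [c01, c11]])
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    semi_minor, semi_major = np.sqrt(eigvals)     # axis lengths are sqrt of the eigenvalues
    major_vec = eigvecs[:, 1]                     # eigenvector of the largest eigenvalue
    rotation_deg = np.degrees(np.arctan2(major_vec[1], major_vec[0]))
    return semi_major, semi_minor, rotation_deg

print(ellipse_parameters(4.0, 0.0, 1.0))   # approximately (2.0, 1.0, 0.0)
```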

get_uncertainty_radius()[source]

Compute the geometric mean radius of uncertainty ellipses.

The geometric mean radius is computed as the fourth root of the covariance matrix determinant: sqrt(sqrt(det(Cov))) = sqrt(sqrt(C00*C11 - C01^2)). This represents the radius of a circle with the same area as the uncertainty ellipse.

Returns:

Array of geometric mean radii for each track point, or None if uncertainty data is not available

Return type:

Optional[NDArray[np.float64]]
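
A worked instance of the formula above: for C00 = 9, C11 = 4, C01 = 0, det(Cov) = 36, so the radius is 36^(1/4) = sqrt(6) ≈ 2.449, the geometric mean of the 3- and 2-pixel standard deviations.

```python
import numpy as np

c00, c01, c11 = 9.0, 0.0, 4.0
radius = np.sqrt(np.sqrt(c00 * c11 - c01**2))   # fourth root of det(Cov)
print(radius)   # 2.449..., i.e. sqrt(6)
```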

get_times()[source]

Get timestamps for each track point using sensor imagery times.

Matches track frames to sensor imagery frames and returns corresponding timestamps. Returns NaT (Not-a-Time) for frames without matching imagery.

Returns:

Array of timestamps with same length as track, or None if sensor has no imagery times

Return type:

NDArray[np.datetime64] or None

Notes

Uses binary search (searchsorted) for efficient frame matching.
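
A sketch of the searchsorted matching described above, assuming the sensor exposes sorted imagery frame numbers alongside their timestamps (the array names here are illustrative, not actual Sensor attributes):

```python
import numpy as np

# Hypothetical sensor data: sorted imagery frame numbers and their timestamps.
imagery_frames = np.array([0, 2, 4, 6])
imagery_times = np.array(
    ["2024-01-01T00:00:00", "2024-01-01T00:00:01",
     "2024-01-01T00:00:02", "2024-01-01T00:00:03"],
    dtype="datetime64[ns]",
)

track_frames = np.array([2, 3, 6])

# Binary search for each track frame, then mark non-exact matches as NaT.
pos = np.clip(np.searchsorted(imagery_frames, track_frames), 0, len(imagery_frames) - 1)
times = imagery_times[pos]
times[imagery_frames[pos] != track_frames] = np.datetime64("NaT")
print(times)   # frames 2 and 6 get real timestamps, frame 3 becomes NaT
```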

classmethod from_dataframe(df, sensor, name=None)[source]

Create Track from DataFrame with automatic coordinate conversion.

Supports multiple input coordinate systems with automatic conversion:

  • Frames or Times → Frames (Times require sensor imagery times)

  • Rows/Columns or Lat/Lon/Alt → Rows/Columns (geodetic input requires the sensor)

Priority system: Frames > Times, Rows/Columns > Lat/Lon/Alt

Parameters:
  • df (pd.DataFrame) – DataFrame containing track data with required columns based on coordinate system (see Notes)

  • sensor (Sensor) – Sensor object for coordinate conversions

  • name (str, optional) – Track name, by default taken from df[“Track”]

Returns:

New Track object with converted coordinates

Return type:

Track

Raises:

ValueError – If required columns are missing or coordinate conversion fails

Notes

Required columns (one set from each group):

Temporal coordinates (choose one):
  • “Frames” : Frame numbers (preferred)

  • “Times” : Timestamps (requires sensor with imagery times)

Spatial coordinates (choose one):
  • “Rows” and “Columns” : Pixel coordinates (preferred)

  • “Latitude (deg)”, “Longitude (deg)”, “Altitude (km)” : Geodetic coordinates (requires sensor with geolocation capability)

Optional styling columns:
  • “Color”, “Marker”, “Line Width”, “Marker Size”, “Visible”, “Complete”, “Show Line”, “Line Style”, “Tail Length”, “Labels”
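
A usage sketch with the frame/pixel column set (assumes an existing Sensor instance sensor; the geodetic column set works the same way given a sensor with geolocation capability):

```python
import pandas as pd

from vista.tracks.track import Track

# `sensor` is assumed to be an existing vista Sensor instance.
df = pd.DataFrame(
    {
        "Track": ["target_01"] * 4,
        "Frames": [0, 1, 2, 3],
        "Rows": [120.5, 121.0, 121.8, 122.4],
        "Columns": [64.0, 65.2, 66.9, 68.1],
        "Color": ["r"] * 4,          # optional styling column
    }
)

track = Track.from_dataframe(df, sensor)   # name taken from df["Track"]
```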

property length

Cumulative Euclidean distance along the track path.

Computes the sum of pixel distances between consecutive track points. Result is cached for performance.

Returns:

Total track length in pixels, or 0.0 if track has fewer than 2 points

Return type:

float
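
The cumulative distance is the sum of per-segment Euclidean steps in pixel space; a rough equivalent:

```python
import numpy as np

rows = np.array([0.0, 3.0, 3.0])
columns = np.array([0.0, 4.0, 10.0])

# Sum of Euclidean distances between consecutive (row, column) points.
length = float(np.hypot(np.diff(rows), np.diff(columns)).sum())
print(length)   # 11.0: a 3-4-5 step of length 5, then a horizontal step of 6
```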

copy()[source]

Create a deep copy of this track object.

Returns:

New Track object with copied arrays and styling attributes

Return type:

Track

to_dataframe()[source]

Convert track to pandas DataFrame.

Raises:

ValueError – If geolocation or time output is requested but the imagery is missing the required data
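
A round-trip usage sketch (assumes track is an existing Track built against a sensor that supports the requested output columns):

```python
# Export, save, and rebuild an equivalent track from the exported data.
df = track.to_dataframe()                  # may raise ValueError if imagery lacks needed data
df.to_csv("target_01.csv", index=False)    # or use track.to_csv(...)

restored = Track.from_dataframe(df, track.sensor, name=track.name)
```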

__init__(name, frames, rows, columns, sensor, color='g', marker='o', line_width=2, marker_size=12, visible=True, tail_length=0, complete=False, show_line=True, line_style='SolidLine', labels=<factory>, extraction_metadata=None, covariance_00=None, covariance_01=None, covariance_11=None, show_uncertainty=False)