Google OR-Tools v9.11
a fast and portable software suite for combinatorial optimization
operations_research::pdlp::ShardedWeightedAverage Class Reference

#include <sharded_optimization_utils.h>

Public Member Functions

 ShardedWeightedAverage (const Sharder *sharder)
 
 ShardedWeightedAverage (ShardedWeightedAverage &&)=default
 
ShardedWeightedAverage & operator= (ShardedWeightedAverage &&)=default
 
void Add (const Eigen::VectorXd &datapoint, double weight)
 
void Clear ()
 Clears the sum to zero, i.e., restores the state of a just-constructed object.
 
bool HasNonzeroWeight () const
 
double Weight () const
 Returns the sum of the weights of the datapoints added so far.
 
Eigen::VectorXd ComputeAverage () const
 
int NumTerms () const
 

Detailed Description

This class computes weighted averages of vectors. It satisfies the following property: if all the averaged vectors have component i equal to x, then the average has component i exactly equal to x, without any floating-point roundoff. In practice the above is probably still true with "equal to x" replaced by "at least x" or "at most x". However, unrealistic counterexamples probably exist, involving a new item with weight 10^15 times greater than the total weight so far.

Definition at line 39 of file sharded_optimization_utils.h.
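The following is a minimal usage sketch, not taken from the library's documentation. The include paths and the Sharder constructor arguments (element count, shard count, and a null thread pool for serial execution) are assumptions based on sharder.h; adjust them to the Sharder API in your version. Only the ShardedWeightedAverage calls shown here are documented on this page.

#include <cstdint>

#include "Eigen/Core"
#include "ortools/pdlp/sharded_optimization_utils.h"
#include "ortools/pdlp/sharder.h"

using ::operations_research::pdlp::Sharder;
using ::operations_research::pdlp::ShardedWeightedAverage;

void ExampleWeightedAverage() {
  const int64_t num_elements = 4;
  // Assumed Sharder constructor: (num_elements, num_shards, thread_pool),
  // where a null thread pool is assumed to mean serial execution.
  Sharder sharder(num_elements, /*num_shards=*/2, /*thread_pool=*/nullptr);

  // The sharder must outlive the average: the pointer is retained.
  ShardedWeightedAverage average(&sharder);

  // Add two copies of the same vector with different weights.
  const Eigen::VectorXd x = Eigen::VectorXd::Constant(num_elements, 3.25);
  average.Add(x, /*weight=*/1.0);
  average.Add(x, /*weight=*/7.0);

  // Every added vector has component i equal to 3.25, so the average has
  // component i exactly equal to 3.25, with no floating-point roundoff.
  const Eigen::VectorXd result = average.ComputeAverage();
  // result == x; average.Weight() == 8.0; average.NumTerms() == 2.
}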

Constructor & Destructor Documentation

◆ ShardedWeightedAverage() [1/2]

operations_research::pdlp::ShardedWeightedAverage::ShardedWeightedAverage ( const Sharder * sharder)
explicit

Initializes the weighted average by creating a vector sized according to the number of elements in sharder. Retains the pointer to sharder, so sharder must outlive this object.

Definition at line 43 of file sharded_optimization_utils.cc.

◆ ShardedWeightedAverage() [2/2]

operations_research::pdlp::ShardedWeightedAverage::ShardedWeightedAverage ( ShardedWeightedAverage && )
default

Member Function Documentation

◆ Add()

void operations_research::pdlp::ShardedWeightedAverage::Add ( const Eigen::VectorXd & datapoint, double weight )

Adds datapoint to the average weighted by weight. CHECK-fails if weight is negative.

We considered the five averaging algorithms M_* listed on the first page of https://www.jstor.org/stable/2286154 and the Kahan summation algorithm (https://en.wikipedia.org/wiki/Kahan_summation_algorithm). Of these, only M_14 satisfies our desired property that a constant sequence is averaged without roundoff while requiring that only a single vector be stored. We therefore use M_14 (actually a natural weighted generalization of it; see below).

An `if` in the implementation protects against NaN when `sum_weights_` is also 0.0.

Definition at line 54 of file sharded_optimization_utils.cc.
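To make the M_14-style update concrete, here is a scalar sketch of the weighted generalization. It is illustrative only, not the library's code, and the member names are made up; the real class presumably applies a vectorized, sharded version of the same update. The average is updated in place as average += (weight / (sum_weights + weight)) * (datapoint - average), so adding a datapoint equal to the current average adds a correction of exactly zero, which is why a constant sequence is averaged without roundoff.

#include <cassert>
#include <initializer_list>

// Illustrative scalar version of the weighted M_14-style incremental update.
struct ScalarWeightedAverage {
  double average = 0.0;
  double sum_weights = 0.0;

  void Add(double datapoint, double weight) {
    if (weight == 0.0) return;  // Also avoids 0/0 when sum_weights == 0.
    average += (weight / (sum_weights + weight)) * (datapoint - average);
    sum_weights += weight;
  }
};

int main() {
  ScalarWeightedAverage avg;
  for (double w : {1.0, 0.3, 2.5e7}) avg.Add(0.1, w);
  // Exact equality holds even though 0.1 is not exactly representable:
  // after the first Add, (datapoint - average) is exactly zero each time.
  assert(avg.average == 0.1);
  return 0;
}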

◆ Clear()

void operations_research::pdlp::ShardedWeightedAverage::Clear ( )

Clears the sum to zero, i.e., restores the state of a just-constructed object.

Definition at line 68 of file sharded_optimization_utils.cc.

◆ ComputeAverage()

VectorXd operations_research::pdlp::ShardedWeightedAverage::ComputeAverage ( ) const

Computes the weighted average of the datapoints added so far, i.e., (sum_i weight[i] * datapoint[i]) / (sum_i weight[i]). The result is an all-zero vector if HasNonzeroWeight() is false.

Todo
(user): consider returning a reference to avoid this copy.

Definition at line 74 of file sharded_optimization_utils.cc.
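For reference, the documented formula can be spelled out directly. The helper below is hypothetical and is not part of the library; it only makes explicit what ComputeAverage() is documented to return. Note that ShardedWeightedAverage itself does not store the individual datapoints; it accumulates the average incrementally.

#include <cstddef>
#include <vector>

#include "Eigen/Core"

// Hypothetical reference computation of the documented result:
//   (sum_i weight[i] * datapoint[i]) / (sum_i weight[i]),
// or an all-zero vector when the total weight is zero.
// Assumes datapoints is nonempty and all vectors have the same size.
Eigen::VectorXd ReferenceWeightedAverage(
    const std::vector<Eigen::VectorXd>& datapoints,
    const std::vector<double>& weights) {
  Eigen::VectorXd weighted_sum =
      Eigen::VectorXd::Zero(datapoints.front().size());
  double sum_weights = 0.0;
  for (std::size_t i = 0; i < datapoints.size(); ++i) {
    weighted_sum += weights[i] * datapoints[i];
    sum_weights += weights[i];
  }
  if (sum_weights == 0.0) return Eigen::VectorXd::Zero(weighted_sum.size());
  return weighted_sum / sum_weights;
}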

◆ HasNonzeroWeight()

bool operations_research::pdlp::ShardedWeightedAverage::HasNonzeroWeight ( ) const
inline

Returns true if there is at least one term in the average with a positive weight.

Definition at line 58 of file sharded_optimization_utils.h.

◆ NumTerms()

int operations_research::pdlp::ShardedWeightedAverage::NumTerms ( ) const
inline

Definition at line 68 of file sharded_optimization_utils.h.

◆ operator=()

ShardedWeightedAverage & operations_research::pdlp::ShardedWeightedAverage::operator= ( ShardedWeightedAverage && )
default

◆ Weight()

double operations_research::pdlp::ShardedWeightedAverage::Weight ( ) const
inline

Returns the sum of the weights of the datapoints added so far.

Definition at line 61 of file sharded_optimization_utils.h.


The documentation for this class was generated from the following files: