metrics#
Metrics to measure the quality of program outputs.
Metrics are used in program evaluation and error reporting, where the goal is to compare the outputs of a program to a set of expected outputs in order to quantify the error. They are designed to be flexible and extensible, allowing users to combine them in various ways and define their own metrics.
A number of pre-built metrics are provided, for example QuantileAbsoluteError, HighRelativeErrorFraction, and SpuriousRate. Users can also define their own custom metrics using CustomSingleOutputMetric or CustomMultiBaselineMetric.
Suppose we have a SessionProgram that has one protected input and produces one output that is a count of the number of rows in the protected input.
>>> class MinimalProgram(SessionProgram):
...     class ProtectedInputs:
...         protected_df: DataFrame  # DataFrame type annotation is required
...     class Outputs:
...         total_count: DataFrame  # required here too
...     def session_interaction(self, session: Session):
...         count_query = QueryBuilder("protected_df").count()
...         budget = self.privacy_budget  # session.remaining_privacy_budget also works
...         total_count = session.evaluate(count_query, budget)
...         return {"total_count": total_count}
We can pass this information to the SessionProgramTuner class, which is what gives us access to error reports.
We can measure the error of the program by comparing the program output to the expected output. Suppose we want to use a built-in metric, the absolute error (AbsoluteError), and a custom metric, the root mean squared error. We need to instantiate the metrics and include them in the list assigned to the metrics class variable.
>>> protected_df = spark.createDataFrame(pd.DataFrame({"a": [1] * 25}))
>>> def compute_rmse(
...     dp_outputs: DataFrame, baseline_outputs: DataFrame
... ):
...     total_count_dp = dp_outputs.select("count").collect()[0]["count"]
...     total_count_baseline = (
...         baseline_outputs.select("count").collect()[0]["count"]
...     )
...     squared_error = (total_count_dp - total_count_baseline) ** 2
...     return math.sqrt(squared_error)
>>> class Tuner(SessionProgramTuner, program=MinimalProgram):
...     metrics = [
...         AbsoluteError(output="total_count", column="count"),
...         CustomSingleOutputMetric(
...             func=compute_rmse,
...             name="root_mean_squared_error",
...             description="Root mean squared error",
...             output="total_count",
...         ),
...     ]
>>> tuner = (
...     Tuner.Builder()
...     .with_privacy_budget(PureDPBudget(epsilon=1))
...     .with_private_dataframe("protected_df", protected_df, AddOneRow())
...     .build()
... )
Now that our SessionProgramTuner is initialized, we can get our very first error report by calling the error_report() method.
>>> error_report = tuner.error_report()
>>> error_report.dp_outputs["total_count"].show()
+-----+
|count|
+-----+
| 25|
+-----+
>>> error_report.baseline_outputs["default"]["total_count"].show()
+-----+
|count|
+-----+
| 23|
+-----+
>>> error_report.show()
Error report ran with budget PureDPBudget(epsilon=1) and no tunable parameters and no additional parameters
Metric results:
+---------+-------------------------+------------+------------------------------------------------------+
| Value | Metric | Baseline | Description |
+=========+=========================+============+======================================================+
| 2 | abs_err | default | Absolute error for column count of table total_count |
+---------+-------------------------+------------+------------------------------------------------------+
| 2 | root_mean_squared_error | default | Root mean squared error |
+---------+-------------------------+------------+------------------------------------------------------+
See the tutorials starting at Basics of error measurement for more examples on how to take advantage of metrics and related classes.
Classes#
- AbsoluteError: Computes the absolute error between two scalar values.
- QuantileAbsoluteError: Computes the quantile of the empirical absolute error.
- MedianAbsoluteError: Computes the median absolute error.
- RelativeError: Computes the relative error between two scalar values.
- QuantileRelativeError: Computes the quantile of the empirical relative error.
- MedianRelativeError: Computes the median relative error.
- HighRelativeErrorFraction: Computes the fraction of groups with relative error above a threshold.
- SpuriousRate: Computes the fraction of groups in the DP output but not in the baseline output.
- SuppressionRate: Computes the fraction of groups in the baseline output but not in the DP output.
- CustomSingleOutputMetric: Wrapper to allow users to define a metric that operates on a single output table.
- CustomMultiBaselineMetric: Wrapper to turn a function into a metric using DP and multiple baselines' outputs.
- Metric: A generic metric.
- MetricOutput: An output of a Metric with additional metadata.
- class AbsoluteError(output, column=None, *, name=None, description=None, baselines=None)#
Bases:
tmlt.analytics.metrics._base.ScalarMetric
Computes the absolute error between two scalar values.
Note
This is only available on a paid version of Tumult Analytics. If you would like to hear more, please contact us at info@tmlt.io.
How it works:
The algorithm takes as input two single-row tables: one representing the differentially private (DP) output and the other representing the baseline output.
DP Table (dp): This table contains the output data generated by a differentially private mechanism.
Baseline Table (baseline): This table contains the output data generated by a non-private or baseline mechanism. It serves as a reference point for comparison with the DP output.
The scalar values are retrieved from these single-row dataframes. Both values are expected to be numeric (either integers or floats). If not, the algorithm raises a ValueError.
The algorithm computes the absolute error. Absolute error is calculated as the absolute difference between the DP and baseline values using the formula \(abs(dp - baseline)\).
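The formula itself is simple; as a rough sketch of the computation in plain Python (using a hypothetical absolute_error helper for illustration, not the library's implementation):
>>> def absolute_error(dp_value, baseline_value):
...     # Conceptual sketch of abs(dp - baseline); not the library's code.
...     return abs(dp_value - baseline_value)
>>> absolute_error(5, 6)
1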
Example
>>> dp_df = spark.createDataFrame(pd.DataFrame({"X": [5]}))
>>> dp_outputs = {"O": dp_df}
>>> baseline_df = spark.createDataFrame(pd.DataFrame({"X": [6]}))
>>> baseline_outputs = {"O": baseline_df}
>>> metric = AbsoluteError(output="O")
>>> result = metric.compute_for_baseline(dp_outputs, baseline_outputs)
>>> result
1
>>> metric.format(result)
'1'
- Parameters
- __init__(output, column=None, *, name=None, description=None, baselines=None)#
Constructor.
- Parameters
column (Optional[str], default: None) – The column to compute the absolute error over. If the given output has only one column, this argument may be omitted.
name (Optional[str], default: None) – A name for the metric.
description (Optional[str], default: None) – A description of the metric.
baselines (Optional[List[str]], default: None) – The name of the baseline program(s) used for the error report. If None, use all baselines specified as custom baseline and baseline options on tuner class. If no baselines are specified on tuner class, use default baseline. If a string, use only that baseline. If a list, use only those baselines.
- format(value)#
Returns a string representation of this object.
- compute_on_scalar(dp_value, baseline_value)#
Computes metric value from DP and baseline values.
- compute_for_baseline(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Returns the metric value given the DP outputs and the baseline outputs.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) –
baseline_outputs (Dict[str, pyspark.sql.DataFrame]) –
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) –
program_parameters (Optional[Dict[str, Any]]) –
- Return type
Any
- property baselines#
Returns the baselines used for the metric.
- __call__(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Computes the given metric on the given DP and baseline outputs.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) – The differentially private outputs of the program.
baseline_outputs (Dict[str, Dict[str, pyspark.sql.DataFrame]]) – The outputs of the baseline programs.
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) – Optional public dataframes used in error computation.
program_parameters (Optional[Dict[str, Any]]) – Optional program specific parameters used in error computation.
- Return type
- class QuantileAbsoluteError(output, quantile, measure_column, join_columns, *, name=None, description=None, baselines=None)#
Bases:
tmlt.analytics.metrics._base.JoinedOutputMetric
Computes the quantile of the empirical absolute error.
Note
This is only available on a paid version of Tumult Analytics. If you would like to hear more, please contact us at info@tmlt.io.
How it works:
The algorithm takes as input two tables: one representing the differentially private (DP) output and the other representing the baseline output.
DP Table (dp): This table contains the output data generated by a differentially private mechanism.
Baseline Table (baseline): This table contains the output data generated by a non-private or baseline mechanism. It serves as a reference point for comparison with the DP output.
The algorithm includes error handling to ensure the validity of the input data. It checks for the existence and numeric type of the measure_column.
The algorithm performs an inner join between the DP and baseline tables based on join_columns. This join must be one-to-one, with each row in the DP table matching exactly one row in the baseline table, and vice versa. This ensures that there is a direct correspondence between the DP and baseline outputs for each entity, allowing for accurate comparison.
After performing the join, the algorithm computes the absolute error for each group. Absolute error is calculated as the absolute difference between the corresponding values in the DP and baseline outputs using the formula \(abs(dp - baseline)\).
The algorithm then calculates the n-th quantile of the absolute error across all groups.
The algorithm handles cases where the quantile computation may result in an empty column, returning a NaN (not a number) value in such scenarios.
Note
The provided algorithm assumes a one-to-one join scenario.
Nulls in the measure columns are dropped because the metric cannot handle null values, and the absolute error computation requires valid numeric values in both columns.
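As a conceptual illustration of the join-then-quantile computation described above, here is a rough pandas sketch under the one-to-one join assumption (an illustration only, not the library's implementation):
>>> import pandas as pd
>>> dp = pd.DataFrame({"A": ["a1", "a2", "a3"], "X": [50, 110, 100]})
>>> baseline = pd.DataFrame({"A": ["a1", "a2", "a3"], "X": [100, 100, 100]})
>>> # One-to-one inner join on the join columns, then absolute error per row.
>>> joined = dp.merge(baseline, on="A", suffixes=("_dp", "_baseline"))
>>> abs_err = (joined["X_dp"] - joined["X_baseline"]).abs()
>>> float(abs_err.quantile(0.5))
10.0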
Example
>>> dp_df = spark.createDataFrame(
...     pd.DataFrame(
...         {
...             "A": ["a1", "a2", "a3"],
...             "X": [50, 110, 100]
...         }
...     )
... )
>>> dp_outputs = {"O": dp_df}
>>> baseline_df = spark.createDataFrame(
...     pd.DataFrame(
...         {
...             "A": ["a1", "a2", "a3"],
...             "X": [100, 100, 100]
...         }
...     )
... )
>>> baseline_outputs = {"O": baseline_df}
>>> metric = QuantileAbsoluteError(
...     output="O",
...     quantile=0.5,
...     measure_column="X",
...     join_columns=["A"]
... )
>>> metric.quantile
0.5
>>> metric.join_columns
['A']
>>> result = metric.compute_for_baseline(dp_outputs, baseline_outputs)
>>> result
10.0
>>> metric.format(result)
'10.0'
Methods#
- quantile: Returns the quantile.
- measure_column: Returns name of the column to compute the quantile of absolute error over.
- format: Returns a string representation of this object.
- compute_on_joined_output: Computes quantile absolute error value from combined dataframe.
- output: Returns the name of the run output or view name.
- join_columns: Returns the name of the join columns.
- check_join_key_uniqueness: Check if the join keys uniquely identify rows in the joined DataFrame.
- compute_for_baseline: Computes metric value.
- name: Returns the name of the metric.
- description: Returns the description of the metric.
- baselines: Returns the baselines used for the metric.
- __call__: Computes the given metric on the given DP and baseline outputs.
- Parameters
- __init__(output, quantile, measure_column, join_columns, *, name=None, description=None, baselines=None)#
Constructor.
- Parameters
measure_column (str) – The column to compute the quantile of absolute error over.
quantile (float) – The quantile to calculate (between 0 and 1).
name (Optional[str], default: None) – A name for the metric.
description (Optional[str], default: None) – A description of the metric.
baselines (Optional[List[str]], default: None) – The name of the baseline program(s) used for the error report. If None, use all baselines specified as custom baseline and baseline options on tuner class. If no baselines are specified on tuner class, use default baseline. If a string, use only that baseline. If a list, use only those baselines.
- property measure_column#
Returns name of the column to compute the quantile of absolute error over.
- Return type
- format(value)#
Returns a string representation of this object.
- compute_on_joined_output(joined_output)#
Computes quantile absolute error value from combined dataframe.
- Parameters
joined_output (pyspark.sql.DataFrame) –
- check_join_key_uniqueness(joined_output)#
Check if the join keys uniquely identify rows in the joined DataFrame.
- Parameters
joined_output (pyspark.sql.DataFrame) –
- compute_for_baseline(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Computes metric value.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) –
baseline_outputs (Dict[str, pyspark.sql.DataFrame]) –
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) –
program_parameters (Optional[Dict[str, Any]]) –
- property baselines#
Returns the baselines used for the metric.
- __call__(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Computes the given metric on the given DP and baseline outputs.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) – The differentially private outputs of the program.
baseline_outputs (Dict[str, Dict[str, pyspark.sql.DataFrame]]) – The outputs of the baseline programs.
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) – Optional public dataframes used in error computation.
program_parameters (Optional[Dict[str, Any]]) – Optional program specific parameters used in error computation.
- Return type
- class MedianAbsoluteError(output, measure_column, join_columns, *, name=None, description=None, baselines=None)#
Bases:
QuantileAbsoluteError
Computes the median absolute error.
Equivalent to QuantileAbsoluteError with quantile = 0.5.
Note
This is only available on a paid version of Tumult Analytics. If you would like to hear more, please contact us at info@tmlt.io.
Example
>>> dp_outputs = {"O": dp_df}
>>> baseline_df = spark.createDataFrame(
...     pd.DataFrame(
...         {
...             "A": ["a1", "a2", "a3"],
...             "X": [100, 100, 100]
...         }
...     )
... )
>>> baseline_outputs = {"O": baseline_df}
>>> metric = MedianAbsoluteError(
...     output="O",
...     measure_column="X",
...     join_columns=["A"]
... )
>>> metric.quantile
0.5
>>> metric.join_columns
['A']
>>> result = metric.compute_for_baseline(dp_outputs, baseline_outputs)
>>> result
10.0
>>> metric.format(result)
'10.0'
Methods#
- quantile: Returns the quantile.
- measure_column: Returns name of the column to compute the quantile of absolute error over.
- format: Returns a string representation of this object.
- compute_on_joined_output: Computes quantile absolute error value from combined dataframe.
- output: Returns the name of the run output or view name.
- join_columns: Returns the name of the join columns.
- check_join_key_uniqueness: Check if the join keys uniquely identify rows in the joined DataFrame.
- compute_for_baseline: Computes metric value.
- name: Returns the name of the metric.
- description: Returns the description of the metric.
- baselines: Returns the baselines used for the metric.
- __call__: Computes the given metric on the given DP and baseline outputs.
- Parameters
- __init__(output, measure_column, join_columns, *, name=None, description=None, baselines=None)#
Constructor.
- Parameters
measure_column (str) – The column to compute the median of absolute error over.
name (Optional[str], default: None) – A name for the metric.
description (Optional[str], default: None) – A description of the metric.
baselines (Optional[List[str]], default: None) – The name of the baseline program(s) used for the error report. If None, use all baselines specified as custom baseline and baseline options on tuner class. If no baselines are specified on tuner class, use default baseline. If a string, use only that baseline. If a list, use only those baselines.
- property measure_column#
Returns name of the column to compute the quantile of absolute error over.
- Return type
- format(value)#
Returns a string representation of this object.
- compute_on_joined_output(joined_output)#
Computes quantile absolute error value from combined dataframe.
- Parameters
joined_output (pyspark.sql.DataFrame) –
- check_join_key_uniqueness(joined_output)#
Check if the join keys uniquely identify rows in the joined DataFrame.
- Parameters
joined_output (pyspark.sql.DataFrame) –
- compute_for_baseline(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Computes metric value.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) –
baseline_outputs (Dict[str, pyspark.sql.DataFrame]) –
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) –
program_parameters (Optional[Dict[str, Any]]) –
- property baselines#
Returns the baselines used for the metric.
- __call__(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Computes the given metric on the given DP and baseline outputs.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) – The differentially private outputs of the program.
baseline_outputs (Dict[str, Dict[str, pyspark.sql.DataFrame]]) – The outputs of the baseline programs.
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) – Optional public dataframes used in error computation.
program_parameters (Optional[Dict[str, Any]]) – Optional program specific parameters used in error computation.
- Return type
- class RelativeError(output, column=None, *, name=None, description=None, baselines=None)#
Bases:
tmlt.analytics.metrics._base.ScalarMetric
Computes the relative error between two scalar values.
Note
This is only available on a paid version of Tumult Analytics. If you would like to hear more, please contact us at info@tmlt.io.
How it works:
The algorithm takes as input two single-row tables: one representing the differentially private (DP) output and the other representing the baseline output.
DP Table (dp): This table contains the output data generated by a differentially private mechanism.
Baseline Table (baseline): This table contains the output data generated by a non-private or baseline mechanism. It serves as a reference point for comparison with the DP output.
The scalar values are retrieved from these single-row dataframes. Both values are expected to be numeric (either integers or floats). If not, the algorithm raises a ValueError.
The algorithm computes the relative error. Relative error is calculated as the absolute difference between the DP and baseline values divided by the baseline value, using the formula \(abs(dp - baseline) / baseline\). If the baseline is zero, it returns infinity (\(∞\)) for non-zero differences and zero (\(0\)) for zero differences.
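As a conceptual illustration of this formula, including the zero-baseline convention, here is a rough sketch in plain Python (using a hypothetical relative_error helper, not the library's implementation):
>>> def relative_error(dp_value, baseline_value):
...     # Conceptual sketch of abs(dp - baseline) / baseline with the
...     # zero-baseline convention described above; not the library's code.
...     if baseline_value == 0:
...         return 0.0 if dp_value == baseline_value else float("inf")
...     return abs(dp_value - baseline_value) / baseline_value
>>> relative_error(5, 4)
0.25
>>> relative_error(3, 0)
inf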
Example
>>> dp_df = spark.createDataFrame(pd.DataFrame({"A": [5]}))
>>> dp_outputs = {"O": dp_df}
>>> baseline_df = spark.createDataFrame(pd.DataFrame({"A": [5]}))
>>> baseline_outputs = {"O": baseline_df}
>>> metric = RelativeError(output="O")
>>> result = metric.compute_for_baseline(dp_outputs, baseline_outputs)
>>> result
0.0
>>> metric.format(result)
'0.0'
- Parameters
- __init__(output, column=None, *, name=None, description=None, baselines=None)#
Constructor.
- Parameters
column (Optional[str], default: None) – The column to compute the relative error over. If the given output has only one column, this argument may be omitted.
name (Optional[str], default: None) – A name for the metric.
description (Optional[str], default: None) – A description of the metric.
baselines (Optional[List[str]], default: None) – The name of the baseline program(s) used for the error report. If None, use all baselines specified as custom baseline and baseline options on tuner class. If no baselines are specified on tuner class, use default baseline. If a string, use only that baseline. If a list, use only those baselines.
- format(value)#
Returns a string representation of this object.
- compute_on_scalar(dp_value, baseline_value)#
Computes metric value from DP and baseline values.
- compute_for_baseline(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Returns the metric value given the DP outputs and the baseline outputs.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) –
baseline_outputs (Dict[str, pyspark.sql.DataFrame]) –
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) –
program_parameters (Optional[Dict[str, Any]]) –
- Return type
Any
- property baselines#
Returns the baselines used for the metric.
- __call__(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Computes the given metric on the given DP and baseline outputs.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) – The differentially private outputs of the program.
baseline_outputs (Dict[str, Dict[str, pyspark.sql.DataFrame]]) – The outputs of the baseline programs.
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) – Optional public dataframes used in error computation.
program_parameters (Optional[Dict[str, Any]]) – Optional program specific parameters used in error computation.
- Return type
- class QuantileRelativeError(output, quantile, measure_column, join_columns, *, name=None, description=None, baselines=None)#
Bases:
tmlt.analytics.metrics._base.JoinedOutputMetric
Computes the quantile of the empirical relative error.
Note
This is only available on a paid version of Tumult Analytics. If you would like to hear more, please contact us at info@tmlt.io.
How it works:
The algorithm takes as input two tables: one representing the differentially private (DP) output and the other representing the baseline output.
DP Table (dp): This table contains the output data generated by a differentially private mechanism.
Baseline Table (baseline): This table contains the output data generated by a non-private or baseline mechanism. It serves as a reference point for comparison with the DP output.
The algorithm includes error handling to ensure the validity of the input data. It checks for the existence and numeric type of the measure_column.
The algorithm performs an inner join between the DP and baseline tables based on join_columns to produce the combined dataframe. This join must be one-to-one, with each row in the DP table matching exactly one row in the baseline table, and vice versa. This ensures that there is a direct correspondence between the DP and baseline outputs for each entity, allowing for accurate comparison.
After performing the join, the algorithm computes the relative error for each group. Relative error is calculated as the absolute difference between the corresponding values in the DP and baseline outputs divided by the baseline value, using the formula \(abs(dp - baseline) / baseline\). If the baseline is zero, it returns infinity (\(∞\)) for non-zero differences and zero (\(0\)) for zero differences.
The algorithm then calculates the n-th quantile of the relative error across all groups.
The algorithm handles cases where the quantile computation may result in an empty column, returning a NaN (not a number) value in such scenarios.
Note
The provided algorithm assumes a one-to-one join scenario.
Nulls in the measure columns are dropped because the metric cannot handle null values, and the absolute error computation requires valid numeric values in both columns.
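As a conceptual illustration of the join-then-quantile computation described above, here is a rough pandas sketch under the one-to-one join assumption (an illustration only, not the library's implementation):
>>> import pandas as pd
>>> dp = pd.DataFrame({"A": ["a1", "a2", "a3"], "X": [50, 110, 100]})
>>> baseline = pd.DataFrame({"A": ["a1", "a2", "a3"], "X": [100, 100, 100]})
>>> # One-to-one inner join on the join columns, then relative error per row.
>>> joined = dp.merge(baseline, on="A", suffixes=("_dp", "_baseline"))
>>> rel_err = (joined["X_dp"] - joined["X_baseline"]).abs() / joined["X_baseline"]
>>> float(rel_err.quantile(0.5))
0.1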
Example
>>> dp_df = spark.createDataFrame(
...     pd.DataFrame(
...         {
...             "A": ["a1", "a2", "a3"],
...             "X": [50, 110, 100]
...         }
...     )
... )
>>> dp_outputs = {"O": dp_df}
>>> baseline_df = spark.createDataFrame(
...     pd.DataFrame(
...         {
...             "A": ["a1", "a2", "a3"],
...             "X": [100, 100, 100]
...         }
...     )
... )
>>> baseline_outputs = {"O": baseline_df}
>>> metric = QuantileRelativeError(
...     output="O",
...     quantile=0.5,
...     measure_column="X",
...     join_columns=["A"]
... )
>>> metric.quantile
0.5
>>> metric.join_columns
['A']
>>> result = metric.compute_for_baseline(dp_outputs, baseline_outputs)
>>> result
0.1
>>> metric.format(result)
'0.10'
Methods#
- quantile: Returns the quantile.
- measure_column: Returns name of the column to compute the quantile of relative error over.
- format: Returns a string representation of this object.
- compute_on_joined_output: Computes quantile relative error value from combined dataframe.
- output: Returns the name of the run output or view name.
- join_columns: Returns the name of the join columns.
- check_join_key_uniqueness: Check if the join keys uniquely identify rows in the joined DataFrame.
- compute_for_baseline: Computes metric value.
- name: Returns the name of the metric.
- description: Returns the description of the metric.
- baselines: Returns the baselines used for the metric.
- __call__: Computes the given metric on the given DP and baseline outputs.
- Parameters
- __init__(output, quantile, measure_column, join_columns, *, name=None, description=None, baselines=None)#
Constructor.
- Parameters
quantile (float) – The quantile to calculate (between 0 and 1).
measure_column (str) – The column to compute the quantile of relative error over.
name (Optional[str], default: None) – A name for the metric.
description (Optional[str], default: None) – A description of the metric.
baselines (Optional[List[str]], default: None) – The name of the baseline program(s) used for the error report. If None, use all baselines specified as custom baseline and baseline options on tuner class. If no baselines are specified on tuner class, use default baseline. If a string, use only that baseline. If a list, use only those baselines.
- property measure_column#
Returns name of the column to compute the quantile of relative error over.
- Return type
- format(value)#
Returns a string representation of this object.
- compute_on_joined_output(joined_output)#
Computes quantile relative error value from combined dataframe.
- Parameters
joined_output (pyspark.sql.DataFrame) –
- check_join_key_uniqueness(joined_output)#
Check if the join keys uniquely identify rows in the joined DataFrame.
- Parameters
joined_output (pyspark.sql.DataFrame) –
- compute_for_baseline(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Computes metric value.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) –
baseline_outputs (Dict[str, pyspark.sql.DataFrame]) –
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) –
program_parameters (Optional[Dict[str, Any]]) –
- property baselines#
Returns the baselines used for the metric.
- __call__(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Computes the given metric on the given DP and baseline outputs.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) – The differentially private outputs of the program.
baseline_outputs (Dict[str, Dict[str, pyspark.sql.DataFrame]]) – The outputs of the baseline programs.
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) – Optional public dataframes used in error computation.
program_parameters (Optional[Dict[str, Any]]) – Optional program specific parameters used in error computation.
- Return type
- class MedianRelativeError(output, measure_column, join_columns, *, name=None, description=None, baselines=None)#
Bases:
QuantileRelativeError
Computes the median relative error.
Equivalent to QuantileRelativeError with quantile = 0.5.
Note
This is only available on a paid version of Tumult Analytics. If you would like to hear more, please contact us at info@tmlt.io.
Example
>>> dp_df = spark.createDataFrame(
...     pd.DataFrame(
...         {
...             "A": ["a1", "a2", "a3"],
...             "X": [50, 110, 100]
...         }
...     )
... )
>>> dp_outputs = {"O": dp_df}
>>> baseline_df = spark.createDataFrame(
...     pd.DataFrame(
...         {
...             "A": ["a1", "a2", "a3"],
...             "X": [100, 100, 100]
...         }
...     )
... )
>>> baseline_outputs = {"O": baseline_df}
>>> metric = MedianRelativeError(
...     output="O",
...     measure_column="X",
...     join_columns=["A"]
... )
>>> metric.quantile
0.5
>>> metric.join_columns
['A']
>>> result = metric.compute_for_baseline(dp_outputs, baseline_outputs)
>>> result
0.1
>>> metric.format(result)
'0.10'
Methods#
- quantile: Returns the quantile.
- measure_column: Returns name of the column to compute the quantile of relative error over.
- format: Returns a string representation of this object.
- compute_on_joined_output: Computes quantile relative error value from combined dataframe.
- output: Returns the name of the run output or view name.
- join_columns: Returns the name of the join columns.
- check_join_key_uniqueness: Check if the join keys uniquely identify rows in the joined DataFrame.
- compute_for_baseline: Computes metric value.
- name: Returns the name of the metric.
- description: Returns the description of the metric.
- baselines: Returns the baselines used for the metric.
- __call__: Computes the given metric on the given DP and baseline outputs.
- Parameters
- __init__(output, measure_column, join_columns, *, name=None, description=None, baselines=None)#
Constructor.
- Parameters
measure_column (str) – The column to compute the median of relative error over.
name (Optional[str], default: None) – A name for the metric.
description (Optional[str], default: None) – A description of the metric.
baselines (Optional[List[str]], default: None) – The name of the baseline program(s) used for the error report. If None, use all baselines specified as custom baseline and baseline options on tuner class. If no baselines are specified on tuner class, use default baseline. If a string, use only that baseline. If a list, use only those baselines.
- property measure_column#
Returns name of the column to compute the quantile of relative error over.
- Return type
- format(value)#
Returns a string representation of this object.
- compute_on_joined_output(joined_output)#
Computes quantile relative error value from combined dataframe.
- Parameters
joined_output (pyspark.sql.DataFrame) –
- check_join_key_uniqueness(joined_output)#
Check if the join keys uniquely identify rows in the joined DataFrame.
- Parameters
joined_output (pyspark.sql.DataFrame) –
- compute_for_baseline(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Computes metric value.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) –
baseline_outputs (Dict[str, pyspark.sql.DataFrame]) –
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) –
program_parameters (Optional[Dict[str, Any]]) –
- property baselines#
Returns the baselines used for the metric.
- __call__(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Computes the given metric on the given DP and baseline outputs.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) – The differentially private outputs of the program.
baseline_outputs (Dict[str, Dict[str, pyspark.sql.DataFrame]]) – The outputs of the baseline programs.
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) – Optional public dataframes used in error computation.
program_parameters (Optional[Dict[str, Any]]) – Optional program specific parameters used in error computation.
- Return type
- class HighRelativeErrorFraction(output, relative_error_threshold, measure_column, join_columns, *, name=None, description=None, baselines=None)#
Bases:
tmlt.analytics.metrics._base.JoinedOutputMetric
Computes the fraction of groups with relative error above a threshold.
Note
This is only available on a paid version of Tumult Analytics. If you would like to hear more, please contact us at info@tmlt.io.
How it works:
The algorithm takes as input two tables: one representing the differentially private (DP) output and the other representing the baseline output.
DP Table (dp): This table contains the output data generated by a differentially private mechanism.
Baseline Table (baseline): This table contains the output data generated by a non-private or baseline mechanism. It serves as a reference point for comparison with the DP output.
The algorithm includes error handling to ensure the validity of the input data. It checks for the existence and numeric type of the measure_column.
The algorithm performs an inner join between the DP and baseline tables based on join_columns to produce the combined dataframe. This join must be one-to-one, with each row in the DP table matching exactly one row in the baseline table, and vice versa. This ensures that there is a direct correspondence between the DP and baseline outputs for each entity, allowing for accurate comparison.
After performing the join, the algorithm computes the relative error for each group. Relative error is calculated as the absolute difference between the corresponding values in the DP and baseline outputs divided by the baseline value, using the formula \(abs(dp - baseline) / baseline\). If the baseline is zero, it returns infinity (\(∞\)) for non-zero differences and zero (\(0\)) for zero differences.
Next, the algorithm filters the relative error dataframe to include only those data points where the relative error exceeds a specified threshold (relative_error_threshold). This threshold represents the maximum allowable relative error for a data point to be considered within acceptable bounds.
Finally, the algorithm calculates the high relative error fraction by dividing the count of data points with relative errors exceeding the threshold by the total count of data points in the dataframe.
The algorithm handles cases where the resulting dataframe after relative error computation is empty (i.e., it contains no data points), returning a NaN (not a number) value in such scenarios.
Note
The provided algorithm assumes a one-to-one join scenario.
Nulls in the measure columns are dropped because the metric cannot handle null values, and the absolute error computation requires valid numeric values in both columns.
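As a conceptual illustration of the filtering and fraction computation described above, here is a rough pandas sketch under the one-to-one join assumption (an illustration only, not the library's implementation):
>>> import pandas as pd
>>> dp = pd.DataFrame({"A": ["a1", "a2", "a3"], "X": [50, 110, 100]})
>>> baseline = pd.DataFrame({"A": ["a1", "a2", "a3"], "X": [100, 100, 100]})
>>> joined = dp.merge(baseline, on="A", suffixes=("_dp", "_baseline"))
>>> rel_err = (joined["X_dp"] - joined["X_baseline"]).abs() / joined["X_baseline"]
>>> # Fraction of rows whose relative error exceeds the threshold.
>>> float((rel_err > 0.25).mean())
0.3333333333333333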
Example
>>> dp_df = spark.createDataFrame(
...     pd.DataFrame(
...         {
...             "A": ["a1", "a2", "a3"],
...             "X": [50, 110, 100]
...         }
...     )
... )
>>> dp_outputs = {"O": dp_df}
>>> baseline_df = spark.createDataFrame(
...     pd.DataFrame(
...         {
...             "A": ["a1", "a2", "a3"],
...             "X": [100, 100, 100]
...         }
...     )
... )
>>> baseline_outputs = {"O": baseline_df}
>>> metric = HighRelativeErrorFraction(
...     output="O",
...     measure_column="X",
...     relative_error_threshold=0.25,
...     join_columns=["A"]
... )
>>> metric.relative_error_threshold
0.25
>>> metric.join_columns
['A']
>>> result = metric.compute_for_baseline(dp_outputs, baseline_outputs)
>>> result
0.3333333333333333
>>> metric.format(result)
'0.33'
Methods#
- relative_error_threshold: Returns the relative error threshold.
- measure_column: Returns name of the column to compute the relative error over.
- format: Returns a string representation of this object.
- compute_on_joined_output: Computes high relative error fraction from combined dataframe.
- output: Returns the name of the run output or view name.
- join_columns: Returns the name of the join columns.
- check_join_key_uniqueness: Check if the join keys uniquely identify rows in the joined DataFrame.
- compute_for_baseline: Computes metric value.
- name: Returns the name of the metric.
- description: Returns the description of the metric.
- baselines: Returns the baselines used for the metric.
- __call__: Computes the given metric on the given DP and baseline outputs.
- Parameters
- __init__(output, relative_error_threshold, measure_column, join_columns, *, name=None, description=None, baselines=None)#
Constructor.
- Parameters
relative_error_threshold (float) – The threshold for the relative error.
measure_column (str) – The column to compute relative error over.
name (Optional[str], default: None) – A name for the metric.
description (Optional[str], default: None) – A description of the metric.
baselines (Optional[str], default: None) – The name of the baseline program(s) used for the error report. If None, use all baselines specified as custom baseline and baseline options on tuner class. If no baselines are specified on tuner class, use default baseline. If a string, use only that baseline. If a list, use only those baselines.
- property measure_column#
Returns name of the column to compute the relative error over.
- Return type
- format(value)#
Returns a string representation of this object.
- compute_on_joined_output(joined_output)#
Computes high relative error fraction from combined dataframe.
- Parameters
joined_output (pyspark.sql.DataFrame) –
- check_join_key_uniqueness(joined_output)#
Check if the join keys uniquely identify rows in the joined DataFrame.
- Parameters
joined_output (pyspark.sql.DataFrame) –
- compute_for_baseline(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Computes metric value.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) –
baseline_outputs (Dict[str, pyspark.sql.DataFrame]) –
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) –
program_parameters (Optional[Dict[str, Any]]) –
- property baselines#
Returns the baselines used for the metric.
- __call__(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Computes the given metric on the given DP and baseline outputs.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) – The differentially private outputs of the program.
baseline_outputs (Dict[str, Dict[str, pyspark.sql.DataFrame]]) – The outputs of the baseline programs.
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) – Optional public dataframes used in error computation.
program_parameters (Optional[Dict[str, Any]]) – Optional program specific parameters used in error computation.
- Return type
- class SpuriousRate(output, join_columns, *, name=None, description=None, baselines=None)#
Bases:
tmlt.analytics.metrics._base.SingleBaselineMetric
Computes the fraction of groups in the DP output but not in the baseline output.
Note
This is only available on a paid version of Tumult Analytics. If you would like to hear more, please contact us at info@tmlt.io.
Note
Below, released means that the group is in the DP output, and spurious means that the group is not in the baseline output.
How it works:
The algorithm takes two dictionaries as input:
dp_outputs: A dictionary containing the differentially private (DP) outputs, where keys represent output identifiers and values represent the corresponding DP output. The DP output data is generated by a differentially private mechanism.
baseline_outputs: A dictionary containing the baseline outputs, where keys represent output identifiers and values represent the corresponding baseline table (baseline).
Before performing computations, the algorithm checks whether the released count of the DP output (released count) is zero. If so, it returns NaN, indicating that no computation can be performed due to the absence of released data. If not, the algorithm performs a left anti-join between the DP and baseline tables based on join_columns. This returns all rows from the DP output (left dataframe) where there is no match in the baseline output (right dataframe); the count of these rows is the spurious released count.
After performing the join, the algorithm computes the spurious rate by dividing the spurious released count by the total count of released data points (released_count), using the formula \(\text{spurious released count} / \text{released count}\). The result represents the proportion of released data points in the DP output that have no corresponding data points in the baseline output.
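As a conceptual illustration of the anti-join counting described above, here is a rough pandas sketch on a single join column (an illustration only, not the library's implementation):
>>> import pandas as pd
>>> dp = pd.DataFrame({"A": ["a1", "a1", "a2", "c"]})
>>> baseline = pd.DataFrame({"A": ["a1", "a1", "a2", "b"]})
>>> # Left anti-join: DP rows with no matching group in the baseline output.
>>> spurious = dp[~dp["A"].isin(baseline["A"])]
>>> len(spurious) / len(dp)
0.25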
Example
>>> dp_df = spark.createDataFrame(
...     pd.DataFrame(
...         {
...             "A": ["a1", "a1", "a2", "c"],
...             "X": [50, 110, 100, 50]
...         }
...     )
... )
>>> dp_outputs = {"O": dp_df}
>>> baseline_df = spark.createDataFrame(
...     pd.DataFrame(
...         {
...             "A": ["a1", "a1", "a2", "b"],
...             "X": [100, 100, 100, 50]
...         }
...     )
... )
>>> baseline_outputs = {"O": baseline_df}
>>> metric = SpuriousRate(
...     output="O",
...     join_columns=["A"]
... )
>>> metric.join_columns
['A']
>>> metric.compute_for_baseline(dp_outputs, baseline_outputs)
0.25
- Parameters
- __init__(output, join_columns, *, name=None, description=None, baselines=None)#
Constructor.
- Parameters
output (str) – The output to compute the spurious rate for.
name (Optional[str], default: None) – A name for the metric.
description (Optional[str], default: None) – A description of the metric.
baselines (Union[str, List[str], None], default: None) – The name of the baseline program(s) used for the error report. If None, use all baselines specified as custom baseline and baseline options on tuner class. If no baselines are specified on tuner class, use default baseline. If a string, use only that baseline. If a list, use only those baselines.
- format(value)#
Returns a string representation of this object.
- check_compatibility_with_program(program, output_views)#
Checks if the metric is compatible with the program.
This is a dynamic check and is performed by verifying whether the output attribute of the metric object is present in the annotations of the Outputs attribute of the program. If the output attribute is not found in the annotations, a ValueError is raised.
- Parameters
program (Type[tmlt.analytics.program.SessionProgram]) –
output_views (List[str]) –
- compute_for_baseline(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Computes spurious rate given DP and baseline outputs.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) –
baseline_outputs (Dict[str, pyspark.sql.DataFrame]) –
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) –
program_parameters (Optional[Dict[str, Any]]) –
- property baselines#
Returns the baselines used for the metric.
- __call__(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Computes the given metric on the given DP and baseline outputs.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) – The differentially private outputs of the program.
baseline_outputs (Dict[str, Dict[str, pyspark.sql.DataFrame]]) – The outputs of the baseline programs.
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) – Optional public dataframes used in error computation.
program_parameters (Optional[Dict[str, Any]]) – Optional program specific parameters used in error computation.
- Return type
- class SuppressionRate(output, join_columns, *, name=None, description=None, baselines=None)#
Bases:
tmlt.analytics.metrics._base.SingleBaselineMetric
Computes the fraction of groups in the baseline output but not in the DP output.
Note
This is only available on a paid version of Tumult Analytics. If you would like to hear more, please contact us at info@tmlt.io.
Note
Below, released means that the group is in the DP output, and spurious means that the group is not in the output of the baseline.
How it works:
The algorithm takes two dictionaries as input:
dp_outputs: A dictionary containing the differentially private (DP) outputs, where keys represent output identifiers and values represent the corresponding DP output. The DP output data is generated by a differentially private mechanism.
baseline_outputs: A dictionary containing the baseline outputs, where keys represent output identifiers and values represent the corresponding baseline table (baseline).
Before performing computations, the algorithm checks whether the count of the baseline output (the non-spurious count) is zero. If so, it returns NaN, indicating that no computation can be performed due to the absence of non-spurious data in the baseline outputs. If not, the algorithm performs a left anti-join between the baseline and DP tables based on join_columns. This returns all rows from the baseline (left dataframe) where there is no match in the DP output (right dataframe); the count of these rows is the non-spurious non-released count.
After performing the join, the algorithm computes the suppression rate by dividing the non-spurious non-released count by the total count of non-spurious data points (non-spurious count), using the formula \(\text{non-spurious non-released count} / \text{non-spurious count}\). The result represents the proportion of non-spurious data points in the baseline outputs that are not released in the DP outputs.
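As a conceptual illustration of the anti-join counting described above, here is a rough pandas sketch on a single join column (an illustration only, not the library's implementation):
>>> import pandas as pd
>>> dp = pd.DataFrame({"A": ["a1", "a1", "a2", "c"]})
>>> baseline = pd.DataFrame({"A": ["a1", "a1", "a2", "b"]})
>>> # Left anti-join: baseline rows with no matching group in the DP output.
>>> suppressed = baseline[~baseline["A"].isin(dp["A"])]
>>> len(suppressed) / len(baseline)
0.25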
Example
>>> dp_df = spark.createDataFrame(
...     pd.DataFrame(
...         {
...             "A": ["a1", "a1", "a2", "c"],
...             "X": [50, 110, 100, 50]
...         }
...     )
... )
>>> dp_outputs = {"O": dp_df}
>>> baseline_df = spark.createDataFrame(
...     pd.DataFrame(
...         {
...             "A": ["a1", "a1", "a2", "b"],
...             "X": [100, 100, 100, 50]
...         }
...     )
... )
>>> baseline_outputs = {"O": baseline_df}
>>> metric = SuppressionRate(
...     output="O",
...     join_columns=["A"]
... )
>>> metric.join_columns
['A']
>>> metric.compute_for_baseline(dp_outputs, baseline_outputs)
0.25
- Parameters
- __init__(output, join_columns, *, name=None, description=None, baselines=None)#
Constructor.
- Parameters
output (str) – Which output to compute the suppression rate for.
name (Optional[str], default: None) – A name for the metric.
description (Optional[str], default: None) – A description of the metric.
baselines (Union[str, List[str], None], default: None) – The name of the baseline program(s) used for the error report. If None, use all baselines specified as custom baseline and baseline options on tuner class. If no baselines are specified on tuner class, use default baseline. If a string, use only that baseline. If a list, use only those baselines.
- format(value)#
Returns a string representation of this object.
- check_compatibility_with_program(program, output_views)#
Checks if the metric is compatible with the program.
This is a dynamic check and is performed by verifying whether the output attribute of the metric object is present in the annotations of the Outputs attribute of the program. If the output attribute is not found in the annotations, a ValueError is raised.
- Parameters
program (Type[tmlt.analytics.program.SessionProgram]) –
output_views (List[str]) –
- compute_for_baseline(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Computes suppression rate given DP and baseline outputs.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) –
baseline_outputs (Dict[str, pyspark.sql.DataFrame]) –
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) –
program_parameters (Optional[Dict[str, Any]]) –
- property baselines#
Returns the baselines used for the metric.
- __call__(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Computes the given metric on the given DP and baseline outputs.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) – The differentially private outputs of the program.
baseline_outputs (Dict[str, Dict[str, pyspark.sql.DataFrame]]) – The outputs of the baseline programs.
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) – Optional public dataframes used in error computation.
program_parameters (Optional[Dict[str, Any]]) – Optional program specific parameters used in error computation.
- Return type
- class CustomSingleOutputMetric(func, output, *, name, description=None, baselines=None)#
Bases:
tmlt.analytics.metrics._base.SingleBaselineMetric
Wrapper to allow users to define a metric that operates on a single output table.
Turns a function that calculates error on two dataframes (one DP, one baseline) into a Metric.
Note
This is only available on a paid version of Tumult Analytics. If you would like to hear more, please contact us at info@tmlt.io.
Example
>>> dp_df = spark.createDataFrame(pd.DataFrame({"A": [5]}))
>>> dp_outputs = {"O": dp_df}
>>> baseline_df = spark.createDataFrame(pd.DataFrame({"A": [5]}))
>>> baseline_outputs = {"O": baseline_df}
>>> def size_difference(dp_outputs: DataFrame, baseline_outputs: DataFrame):
...     return baseline_outputs.count() - dp_outputs.count()
>>> metric = CustomSingleOutputMetric(
...     func=size_difference,
...     name="Output size difference",
...     description="Difference in number of rows.",
...     output="O",
... )
>>> result = metric.compute_for_baseline(dp_outputs, baseline_outputs)
>>> result
0
>>> metric.format(result)
'0'
- Parameters
- __init__(func, output, *, name, description=None, baselines=None)#
Constructor.
- Parameters
func (Callable) – Function for computing a metric value from DP outputs and a single baseline's outputs.
output (str) – The output to calculate the metric over. This is required, even if the program produces a single output.
description (Optional[str], default: None) – A description of the metric.
baselines (Union[str, List[str], None], default: None) – The name of the baseline program(s) used for the error report. If None, use all baselines specified as custom baseline and baseline options on tuner class. If no baselines are specified on tuner class, use default baseline. If a string, use only that baseline. If a list, use only those baselines.
- property func#
Returns function to be applied.
- Return type
Callable
- format(value)#
Converts value to human-readable format.
- Parameters
value (Any) –
- check_compatibility_with_program(program, output_views)#
Checks if the metric is compatible with the program.
- Parameters
program (Type[tmlt.analytics.program.SessionProgram]) –
output_views (List[str]) –
- compute_for_baseline(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Returns the metric value given the DP outputs and the baseline outputs.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) –
baseline_outputs (Dict[str, pyspark.sql.DataFrame]) –
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) –
program_parameters (Optional[Dict[str, Any]]) –
- property baselines#
Returns the baselines used for the metric.
- __call__(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Computes the given metric on the given DP and baseline outputs.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) – The differentially private outputs of the program.
baseline_outputs (Dict[str, Dict[str, pyspark.sql.DataFrame]]) – The outputs of the baseline programs.
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) – Optional public dataframes used in error computation.
program_parameters (Optional[Dict[str, Any]]) – Optional program specific parameters used in error computation.
- Return type
- class CustomMultiBaselineMetric(output, func, *, name, description=None, baselines=None)#
Bases:
tmlt.analytics.metrics._base.MultiBaselineMetric
Wrapper to turn a function into a metric using DP and multiple baselines' outputs.
Note
This is only available on a paid version of Tumult Analytics. If you would like to hear more, please contact us at info@tmlt.io.
Example
>>> dp_df = spark.createDataFrame(pd.DataFrame({"A": [5]}))
>>> dp_outputs = {"O": dp_df}
>>> baseline_df1 = spark.createDataFrame(pd.DataFrame({"A": [5]}))
>>> baseline_df2 = spark.createDataFrame(pd.DataFrame({"A": [6]}))
>>> baseline_outputs = {
...     "O": {"baseline1": baseline_df1, "baseline2": baseline_df2}
... }
>>> _func = lambda dp_outputs, baseline_outputs: {
...     output_key: {
...         baseline_key: AbsoluteError(output_key).compute_on_scalar(
...             dp_output.first().A, baseline_output.first().A
...         )
...         for baseline_key, baseline_output
...         in baseline_outputs[output_key].items()
...     }
...     for output_key, dp_output in dp_outputs.items()
... }
>>> metric = CustomMultiBaselineMetric(
...     output="O",
...     func=_func,
...     name="Custom Metric",
...     description="Custom Description",
... )
>>> result = metric.compute_for_multiple_baselines(dp_outputs, baseline_outputs)
>>> result
{'O': {'baseline1': 0, 'baseline2': 1}}
- Parameters
- __init__(output, func, *, name, description=None, baselines=None)#
Constructor.
- Parameters
func (Callable) – Function for computing a metric value from DP outputs and multiple baseline outputs.
description (Optional[str], default: None) – A description of the metric.
baselines (Union[str, List[str], None], default: None) – The name of the baseline program(s) used for the error report. If None, use all baselines specified as custom baseline and baseline options on tuner class. If no baselines are specified on tuner class, use default baseline. If a string, use only that baseline. If a list, use only those baselines.
- property func#
Returns function to be applied.
- Return type
Callable
- format(value)#
Converts value to human-readable format.
- Parameters
value (Any) –
- check_compatibility_with_program(program, output_views)#
Checks if the metric is compatible with the program.
- Parameters
program (Type[tmlt.analytics.program.SessionProgram]) –
output_views (List[str]) –
- compute_for_multiple_baselines(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Returns the metric value given the DP and multiple baseline outputs.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) –
baseline_outputs (Dict[str, Dict[str, pyspark.sql.DataFrame]]) –
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) –
program_parameters (Optional[Dict[str, Any]]) –
- compute(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Computes the given metric on the given DP and baseline outputs.
The baseline_outputs will already be filtered to only include the baselines that the metric is supposed to use.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) – The differentially private outputs of the program.
baseline_outputs (Dict[str, Dict[str, pyspark.sql.DataFrame]]) – The outputs of the baseline programs, after filtering to only include the baselines that the metric is supposed to use.
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) – Optional public dataframes used in error computation.
program_parameters (Optional[Dict[str, Any]]) – Optional program specific parameters used in error computation.
- Return type
- property baselines#
Returns the baselines used for the metric.
- __call__(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Computes the given metric on the given DP and baseline outputs.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) – The differentially private outputs of the program.
baseline_outputs (Dict[str, Dict[str, pyspark.sql.DataFrame]]) – The outputs of the baseline programs.
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) – Optional public dataframes used in error computation.
program_parameters (Optional[Dict[str, Any]]) – Optional program specific parameters used in error computation.
- Return type
- class Metric(name, description, baselines)#
Bases:
abc.ABC
A generic metric.
Note
This is only available on a paid version of Tumult Analytics. If you would like to hear more, please contact us at info@tmlt.io.
- __init__(name, description, baselines)#
Constructor.
- Parameters
baselines (Union[str, List[str], None]) – The name of the baseline program(s) used for the error report. If None, use all baselines specified as custom baseline and baseline options on tuner class. If no baselines are specified on tuner class, use default baseline. If a string, use only that baseline. If a list, use only those baselines.
- property baselines#
Returns the baselines used for the metric.
- abstract format(value)#
Converts value to human-readable format.
- Parameters
value (Any) –
- __call__(dp_outputs, baseline_outputs, unprotected_inputs=None, program_parameters=None)#
Computes the given metric on the given DP and baseline outputs.
- Parameters
dp_outputs (Dict[str, pyspark.sql.DataFrame]) – The differentially private outputs of the program.
baseline_outputs (Dict[str, Dict[str, pyspark.sql.DataFrame]]) – The outputs of the baseline programs.
unprotected_inputs (Optional[Dict[str, pyspark.sql.DataFrame]]) – Optional public dataframes used in error computation.
program_parameters (Optional[Dict[str, Any]]) – Optional program specific parameters used in error computation.
- Return type
- class MetricOutput#
An output of a Metric with additional metadata.
Note
This is only available on a paid version of Tumult Analytics. If you would like to hear more, please contact us at info@tmlt.io.
- name :str#
The name of the metric.
- description :str#
The description of the metric.
- baseline :Union[str, List[str]]#
The name of the baseline program(s) used for the error report.
- metric :Metric#
The metric that was used.
- value :Any#
The value of the metric applied to the program outputs.
- format()#
Returns a string representation of this object.