Using **only** PSA results from your model, in a matter of seconds the SAVI online application can generate:

- Standardised assessment of uncertainty (cost-effectiveness planes and CEACs)
- Overall EVPI per patient, per jurisdiction per year, and over your decision relevance horizon
- Expected Value of Perfect Parameter Information (EVPPI) for single parameters and for groups of parameters

Disclaimer: This application is based on peer-reviewed statistical approximation methods. It comes with no warranty and should be used at the user's own risk (see here). The underlying code is made available under the BSD 3-clause license.

**If you use SAVI in your work please cite our paper**

Strong M, Oakley JE, Brennan A.
Estimating multi-parameter partial Expected Value of
Perfect Information from a probabilistic sensitivity analysis sample:
a non-parametric regression approach.
*Medical Decision Making.* 2014;**34(3)**:311-26. Available open access
here.

The SAVI process has four steps (work through the tabs from left to right):

Step 1: Save PSA input parameters, costs and effects as separate .csv files

Step 2: Input details about your model, then upload and check PSA samples

Step 3: View your VoI analysis

Step 4: Download your results as .csv files, or download a report as a PDF, HTML or Word document

Our email address is savi@sheffield.ac.uk

Please supply the PSA samples in the form of three csv files.

SAVI assumes that the first row of the parameter file contains the parameter names.

SAVI assumes that the first row of the costs file holds the decision option names.

Avoid using any special symbols (e.g. currency symbols) in the names.

The first row of the effects file should also hold names, but these names are not used by SAVI.

The csv files must each have the same number of rows, and the rows must correspond: the parameter values in row 1 must be those that generated the costs and effects in row 1, and so on.

Costs and effects are assumed to be per-person, and to be absolute rather than incremental (i.e. there must be the same number of columns as decision options, including the baseline decision).
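As a sketch of these layout rules, the following Python snippet checks that three csv files have matching row counts and that the costs and effects files have one column per decision option. The file names and the helper function are hypothetical, not part of SAVI itself:

```python
import csv

def check_psa_files(params_path, costs_path, effects_path):
    """Check the three PSA csv files SAVI expects (hypothetical helper)."""
    tables = {}
    for name, path in [("parameters", params_path),
                       ("costs", costs_path),
                       ("effects", effects_path)]:
        with open(path, newline="") as f:
            rows = list(csv.reader(f))
        # First row holds names (parameter names, or decision option names)
        header, body = rows[0], rows[1:]
        tables[name] = (header, body)

    # All three files must have the same number of PSA samples,
    # with row i of each file coming from the same PSA iteration
    n_rows = {name: len(body) for name, (_, body) in tables.items()}
    assert len(set(n_rows.values())) == 1, f"row counts differ: {n_rows}"

    # Costs and effects are absolute per-person values, one column per
    # decision option (including the baseline), so columns must match
    assert len(tables["costs"][0]) == len(tables["effects"][0]), \
        "costs and effects must have one column per decision option"
    return n_rows["parameters"]
```

A file failing either check would also be rejected by SAVI's own upload validation, so a check like this is only a convenience before uploading.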

(only the first five rows of each dataset are shown)

**Strategies Compared**

Section 5.1 in Briggs, Claxton & Sculpher. Decision Modelling for Health Economic Evaluation (Handbooks for Health Economic Evaluation). Oxford University Press; 1st edition (2006). ISBN-13: 978-0198526629

Fenwick E, Byford S. A guide to cost-effectiveness acceptability curves. The British Journal of Psychiatry. 2005;187:106-108. doi:10.1192/bjp.187.2.106

This is particularly useful when comparing several strategies, because the analyst and decision maker can see in a single measure the expected net value of each strategy, rather than looking at many comparisons of incremental cost-effectiveness ratios between different options. Under the rules of decision theory, the strategy with the greatest expected net benefit is optimal.
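As a minimal illustration of this decision rule, using made-up per-person PSA samples and an assumed willingness-to-pay threshold lambda:

```python
import numpy as np

# Hypothetical PSA samples: rows are PSA iterations, columns are strategies
costs = np.array([[100., 150.], [110., 140.], [95., 160.]])
effects = np.array([[1.0, 1.2], [0.9, 1.3], [1.1, 1.1]])
lam = 20000.0  # willingness-to-pay threshold (lambda), assumed for illustration

nb = lam * effects - costs          # net monetary benefit per PSA sample
expected_nb = nb.mean(axis=0)       # expected net benefit per strategy
best = int(np.argmax(expected_nb))  # optimal strategy under decision theory
```

With these illustrative numbers the second strategy has the higher expected net benefit, so it would be the optimal choice at this threshold.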

Analysis of the expected incremental net benefit helps to visualise whether particular strategies are better than others and how certain a decision maker can be about the differences.

If there are strategies with credible intervals for incremental net benefit that include zero, then there is decision uncertainty. Whether it is valuable to consider further research to reduce uncertainty is the motivation for the value of information calculations. These calculations can consider decision uncertainty arising from all uncertain parameters together (the overall expected value of perfect information – overall EVPI) or for particular sets of uncertain parameters (the expected value of perfect parameter information – EVPPI).

The calculation begins with the existing confidence intervals (or credible intervals) for the model parameters as used in the probabilistic sensitivity analysis. We then imagine a world in which we become absolutely (perfectly) certain about all of the model parameters, i.e. the confidence interval for every single parameter is shrunk right down to zero. The decision maker would then be absolutely certain which strategy to select and would choose the one with the highest net benefit.

One can visualise this idea by imagining that, instead of seeing the cloud of dots on the cost-effectiveness plane (representing current uncertainty in costs and benefits) and having to choose, the decision maker now knows exactly which 'dot' is the true value (because all of the uncertainty is removed) and so can be certain to choose the strategy that gives the greatest net benefit. In a two-strategy comparison of new versus current care, if the 'true dot' turns out to be below and to the right of the threshold lambda line, then the decision maker would select the new strategy. If the 'true dot' is above and to the left, then current care would be selected. Under the current uncertainty, the decision maker will choose the strategy based on the expected costs and benefits (essentially on whether the 'centre of gravity' of the cloud is above or below the threshold line).
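The per-person overall EVPI is then the difference between the expected net benefit with perfect information (picking the best strategy for every 'dot') and the expected net benefit under current uncertainty (picking on expectations). A minimal sketch with simulated net benefit samples; the numbers are illustrative, not from any real model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
# Hypothetical two-strategy PSA: per-person net benefit samples
nb = np.column_stack([rng.normal(5000, 2000, n),
                      rng.normal(5500, 2500, n)])

# Under current uncertainty: choose on expected net benefit
enb_current = nb.mean(axis=0).max()
# With perfect information: pick the best strategy on every 'dot'
enb_perfect = nb.max(axis=1).mean()

evpi = enb_perfect - enb_current  # per-person overall EVPI
```

By Jensen's inequality the EVPI is never negative: knowing the 'true dot' can only improve (or leave unchanged) the expected payoff of the decision.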

Partial EVPI enables identification of the parameters that contribute most to decision uncertainty. For each parameter, the expected value of removing its current uncertainty is displayed in the table below. The barplot shows the parameters in descending order of importance.
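The regression idea behind the partial EVPI approximation can be sketched as follows: regress each strategy's net benefit on the parameter of interest, then apply the EVPI-style formula to the fitted values rather than the raw samples. Here a cubic polynomial stands in for the GAM smoother that SAVI actually uses, and the data are simulated purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
theta = rng.normal(0.0, 1.0, n)  # hypothetical uncertain parameter
noise = rng.normal(0.0, 1.0, (n, 2))
# Hypothetical net benefit for two strategies; only strategy 2
# depends on theta, the rest is noise from other parameters
nb = np.column_stack([np.zeros(n), 1000.0 * theta]) + 500.0 * noise

# Regress each strategy's net benefit on theta; the fitted values
# estimate the conditional expectation E[NB | theta]
fitted = np.column_stack([
    np.polyval(np.polyfit(theta, nb[:, d], 3), theta) for d in range(2)
])

# EVPPI = E[max_d E[NB_d | theta]] - max_d E[NB_d]
evppi = fitted.max(axis=1).mean() - fitted.mean(axis=0).max()
```

The key point is that taking the maximum over the *fitted* values removes the noise from all the other parameters, isolating the decision uncertainty attributable to theta alone.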

Although EVPPI for individual parameters is useful, it is often more informative to compute EVPPI for groups of associated parameters, e.g. all parameters informed by the efficacy data. This is the maximum expected value of further research that would jointly inform that set of parameters.

First, define groups of parameters for which to calculate EVPPI. Choose a subset of parameters using the tick boxes and press the Calculate EVPPI button.

When calculation of the first parameter group is complete, select a new subset (remember to untick your original choices) and press the Calculate EVPPI button again. This can be repeated for any number of different groups, with all results appearing below in an expanding results table.

For subsets of up to four parameters, the GAM regression method is used. For subsets of five or more parameters, the GP regression method is used. See this paper for details.

NOTES

- Currently this table does not automatically update previously calculated EVPPI values when model settings (e.g. lambda) are changed.
- The GP method must invert an n × n matrix, where n is the number of rows in the PSA. This is very slow for large matrices, so only the first 7,500 rows of the PSA are used at present.

This document contains all the tables and figures generated from the SAVI analysis of your PSA.

NB: generating the document can take some time.

This web tool is an R Shiny Server application.

It was written at the University of Sheffield's School of Health and Related Research by Mark Strong, Penny Breeze, Chloe Thomas and Alan Brennan.

The regression-based method for approximating partial EVPI was developed by Mark Strong in collaboration with Jeremy Oakley and Alan Brennan.

The source code is available on GitHub at https://github.com/Sheffield-Accelerated-VoI/SAVI.

Please cite the method as

Strong M, Oakley JE, Brennan A.
Estimating multi-parameter partial Expected Value of
Perfect Information from a probabilistic sensitivity analysis sample:
a non-parametric regression approach.
*Medical Decision Making.* 2014;**34(3)**:311-26. Available open access
here.

Please email us at savi@sheffield.ac.uk

Please tell us about any bugs!

The method for partial EVPI computation that is implemented in this web application arose from independent research supported by the National Institute for Health Research (Mark Strong, postdoctoral fellowship PDF-2012-05-258). The views expressed in this publication are those of the authors and not necessarily those of the National Health Service, the National Institute for Health Research, or the Department of Health.

This website complies with The University of Sheffield's Privacy Policy.