Blinded Sample Size Re-estimation (SSR)
Technical documentation for blinded interim sample size re-estimation in adaptive clinical trials. This page covers nuisance parameter re-estimation from pooled data, the Kieser–Friede framework, conditional power, sensitivity analysis, regulatory alignment, and Monte Carlo validation.
1. Overview & Motivation
A clinical trial's initial sample size depends on assumptions about nuisance parameters—the variance for continuous endpoints or the pooled event rate for binary endpoints. If these assumptions are wrong, the trial may be underpowered or wastefully overpowered.
Blinded sample size re-estimation (SSR) addresses this by re-estimating nuisance parameters from pooled (blinded) interim data and adjusting the target sample size accordingly. Because treatment assignments are never revealed, blinding is preserved and Type I error is controlled under mild regularity conditions.
Key advantage: The FDA classifies blinded SSR as a “well-understood” adaptation that does not require complex statistical methodology for Type I error control (FDA Guidance 2019, Section IV.B.2). This makes it the lowest-risk adaptive design option.
2. Theoretical Foundation
Continuous Endpoints
For a two-arm trial comparing means with common variance $\sigma^2$ and treatment effect $\delta$, the required sample size per arm is:

$$n = \frac{2\sigma^2 (z_{1-\alpha} + z_{1-\beta})^2}{\delta^2}$$

where $z_{1-\alpha} = \Phi^{-1}(1-\alpha)$ and $z_{1-\beta} = \Phi^{-1}(1-\beta)$. The treatment effect $\delta$ is assumed known (the minimally clinically important difference), while $\sigma^2$ is the nuisance parameter to be re-estimated.
At the interim analysis with information fraction $t$, the pooled sample variance is computed from all $n_1 = \lceil t \cdot N \rceil$ patients without unblinding:

$$\hat\sigma^2_{\text{pooled}} = \frac{1}{n_1 - 1} \sum_{i=1}^{n_1} (x_i - \bar{x})^2$$

This estimate is positively biased by approximately $\delta^2/4$ (Kieser & Friede 2003), but the bias is conservative—it leads to slight overestimation of the required sample size, which protects power.
Binary Endpoints
For binary endpoints with control rate $p_c$ and treatment rate $p_t$, the sample size per arm uses the pooled-variance normal approximation (Fleiss, Levin & Paik):

$$n = \frac{2\bar{p}(1-\bar{p})(z_{1-\alpha} + z_{1-\beta})^2}{(p_t - p_c)^2}$$

where $\bar{p} = (p_c + p_t)/2$ is the pooled rate. At interim, the observed blinded pooled rate $\hat{p}$ replaces the planned $\bar{p}$ while the planned effect size $\Delta = p_t - p_c$ is maintained (Friede & Kieser 2004).
Survival (Time-to-Event) Endpoints
For event-driven trials comparing survival curves with hazard ratio $\mathrm{HR}$ and allocation ratio $r$ (treatment:control), the required number of events is given by the Schoenfeld (1981) formula:

$$d = \frac{(1+r)^2}{r} \cdot \frac{(z_{1-\alpha} + z_{1-\beta})^2}{(\log \mathrm{HR})^2}$$

The number of events $d$ is the primary quantity. To convert to a sample size $N = \lceil d/\bar{P} \rceil$, compute the expected event probability under an exponential model with uniform accrual over $a$ months and total study duration $T$:

$$P(\text{event}) = 1 - \frac{e^{-\lambda (T - a)} - e^{-\lambda T}}{\lambda a}$$

where $\lambda = \log 2 / m$ and $m$ is the median survival in the control arm; the treatment arm uses $\lambda_t = \mathrm{HR} \cdot \lambda$, and $\bar{P}$ is the allocation-weighted average of the two arms' event probabilities. At interim, the blinded pooled event rate $\hat{P}$ is re-estimated from observed events across both arms. Because events $d$ are fixed, only $N$ changes.
Why events, not patients? In time-to-event designs, statistical power depends on the number of events, not the number of patients enrolled. The Schoenfeld formula fixes $d$; the sample size $N = \lceil d/\bar{P} \rceil$ is derived and can be updated at interim without affecting the event target.
Why only nuisance parameters? Blinded data cannot distinguish treatment from control, so only parameters that are estimable from pooled data can be updated. The treatment effect $\delta$ (or $\mathrm{HR}$ for survival) remains fixed at the planned value.
3. Recalculation Algorithm
The blinded SSR procedure follows these steps:
Initial Sample Size
Compute $n_0$ using the planned nuisance parameter ($\sigma_0^2$ or $\bar{p}_0$). For survival, compute events $d$ via Schoenfeld, then $N_0 = \lceil d/\bar{P}_0 \rceil$.
Interim Look
At information fraction $t$, $n_1 = \lceil t \cdot N_0 \rceil$ patients have been enrolled.
Blinded Re-estimation
Compute $\hat\sigma^2$ (or $\hat{p}$) from pooled interim data without breaking the blind. For survival, re-estimate the blinded pooled event rate $\hat{P}$ from observed events.
Recalculate
Recompute the required sample size $\hat{N}$ using the re-estimated nuisance parameter while keeping the planned effect size fixed. For survival, $\hat{N} = \lceil d/\hat{P} \rceil$; the event target $d$ stays fixed.
Constrain
Apply the interim floor ($\hat{N} \geq n_1$; patients cannot be un-enrolled) and the protocol cap ($\hat{N} \leq f_{\max} \cdot N_0$, where $f_{\max}$ is the maximum inflation factor, typically 1.5–2.0). Enforce even parity for equal allocation.
Continue
Enroll remaining patients to the adjusted target and perform the final analysis using the standard z-test.
Constraint priority: Interim floor > Cap > Even parity. When the cap is binding, per-arm count is rounded down to respect the protocol limit. When the interim floor is binding, per-arm count is rounded up to accommodate already-enrolled patients.
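The constraint step can be sketched as a small helper that mirrors the priority rule above (the function name and rounding details are our assumptions, not the calculator's implementation):

```python
def constrain_total_n(n_recalc, n_interim, n_initial, max_factor=2.0):
    """Constrain the recalculated total N: interim floor > cap > even parity."""
    n_cap = round(n_initial * max_factor)
    n = max(n_recalc, n_interim)           # floor: cannot un-enroll patients
    cap_binding = n > n_cap
    n = min(n, max(n_cap, n_interim))      # cap, but the floor takes priority
    if n % 2:                              # even parity for 1:1 allocation
        n = n - 1 if cap_binding and n - 1 >= n_interim else n + 1
    return n

constrain_total_n(520, 234, 468)     # within cap: 520
constrain_total_n(1100, 234, 468)    # cap binds at 2.0 * 468 = 936
constrain_total_n(180, 235, 468)     # floor binds; odd 235 rounds up to 236
```

Rounding down when the cap binds keeps the total within the protocol limit; rounding up when the floor binds keeps all enrolled patients.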
4. Conditional Power
After re-estimation, the conditional power quantifies the probability of rejecting $H_0$ at the final analysis, given the interim data and the adjusted sample size:

$$\mathrm{CP} = \Phi\!\left(\hat{z}_1 \sqrt{\hat{N}/n_1} - z_{1-\alpha}\right)$$

where $\hat{N}/n_1$ is the ratio of final to interim sample size (per arm; only the ratio matters) and $\hat{z}_1$ is the expected interim z-statistic under the assumed treatment effect:
Continuous: $\hat{z}_1 = \delta \sqrt{n_1 / (2\hat\sigma^2)}$, with $n_1$ the per-arm interim size
Binary: $\hat{z}_1 = \Delta \sqrt{n_1 / (2\hat{p}(1-\hat{p}))}$
Survival: $\hat{z}_1 = |\log \mathrm{HR}| \sqrt{d_1 \, r} / (1 + r)$, where $d_1$ is the interim event count
Survival note: For time-to-event endpoints, the information ratio uses events: $d/d_1$ (not $\hat{N}/n_1$).
Edge cases: When $\hat{N} = n_1$ (no additional recruitment, e.g., after the interim floor binds), CP reduces to $\Phi(\hat{z}_1 - z_{1-\alpha})$: the probability of rejection based on the interim z-statistic alone.
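For a continuous endpoint, the calculation can be sketched as below (an illustrative example with $\delta = 0.3$ and blinded variance 1.4; the function name is ours):

```python
from math import sqrt
from statistics import NormalDist

def conditional_power(z1, n_final, n_interim, alpha=0.025):
    """CP = Phi(z1 * sqrt(N/n1) - z_{1-alpha}); per-arm sizes
    (only the ratio matters)."""
    nd = NormalDist()
    return nd.cdf(z1 * sqrt(n_final / n_interim) - nd.inv_cdf(1 - alpha))

# delta = 0.3, blinded sigma^2 = 1.4, interim 117/arm, adjusted 327/arm
z1 = 0.3 * sqrt(117 / (2 * 1.4))       # expected interim z-statistic
cp = conditional_power(z1, n_final=327, n_interim=117)   # ~0.90
```

Because the adjusted sample size here is itself derived from the re-estimated variance, the conditional power lands back near the planned 90% target.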
5. Sensitivity Analysis
The calculator generates a sensitivity table showing how the recalculated sample size responds to different nuisance parameter values. This supports protocol planning and DMC communication.
Continuous Endpoints
The observed variance is varied as multiples of the planned variance: 50%, 75%, 100% (planned), 125%, 150%, and 200%. For each scenario, the recalculated N, inflation factor, and conditional power are reported.
Binary Endpoints
The pooled event rate is varied in offsets of −0.10, −0.05, 0.00 (planned), +0.05, and +0.10, clamped to [0.01, 0.99]. Each scenario reports the resulting sample size under the updated nuisance parameter.
Survival Endpoints
The observed event rate is varied as multiples of the planned event probability: 0.5×, 0.75×, 1.0× (planned), 1.25×, 1.5×, and 2.0×. Lower event rates require more patients to achieve the same number of events, so N increases.
Protocol tip: Include the sensitivity table in the SAP appendix to demonstrate that the pre-specified cap accommodates plausible variance inflation scenarios.
6. Statistical Assumptions
Allocation: 1:1 randomization by default for continuous and binary endpoints. Survival endpoints support unequal allocation via the allocation ratio $r$ (treatment:control).
Common variance: For continuous endpoints, the variance is assumed equal across both arms (homoscedasticity).
Fixed effect size: The treatment effect $\delta$, $\Delta$, or $\mathrm{HR}$ remains at the planned value—only the nuisance parameter is re-estimated.
Blinding preserved: The re-estimation uses only pooled data. Treatment assignments are not accessed.
Normal approximation: The final test is a z-test (continuous/binary) or the logrank test (survival). For binary endpoints, adequate sample size per arm is needed for the normal approximation to hold (typically $np \geq 5$ and $n(1-p) \geq 5$ in each arm).
Exponential survival model: For survival endpoints, event times follow an exponential distribution (proportional hazards) with uniform accrual over the accrual period.
Single interim look: One pre-specified interim analysis for re-estimation. Multiple re-estimations require additional considerations.
7. Limitations & When Not to Use
Blinded variance bias: The pooled variance is biased upward by approximately $\delta^2/4$. This is conservative but may lead to unnecessary sample size increases when the true effect is large.
Early interim risk: If the interim look comes very early, the variance estimate may be unstable due to small sample size. The calculator enforces $0.1 \leq t \leq 0.9$.
Cannot detect effect size misspecification: Blinded SSR only re-estimates nuisance parameters. If the treatment effect was overestimated at planning, the trial may still be underpowered. Consider unblinded SSR if effect size uncertainty is the primary concern.
Exponential model assumption: Survival SSR assumes exponential event times and proportional hazards. Non-proportional hazards, cure-rate models, or complex censoring patterns may require external simulation.
Unequal allocation (continuous/binary): Unequal allocation ratios are supported for survival endpoints only. Continuous and binary endpoints assume 1:1 randomization.
8. Regulatory Considerations
Blinded SSR is one of the most regulatory-friendly adaptive designs. The FDA Guidance on Adaptive Designs (2019) explicitly acknowledges that blinded re-estimation preserves Type I error under standard conditions.
Documentation Checklist
Pre-specify the interim timing (information fraction) and the re-estimation procedure in the protocol and SAP.
Define the maximum sample size cap ($f_{\max} \cdot N_0$) and justify it based on feasibility and budget constraints.
Specify that only nuisance parameters (variance or pooled rate) will be re-estimated; the treatment effect remains fixed.
Include the sensitivity analysis table showing recalculated N under various nuisance parameter scenarios.
Confirm that blinding is maintained during the re-estimation and that the independent statistician/DMC oversees the process.
Automated Warnings
The calculator generates context-specific regulatory notes:
Substantial increase (>50%): Flags impact on trial feasibility, budget, and timeline.
Cap binding: Notes that conditional power may fall below target; cap justification required in the protocol.
Deflated estimate: When the re-estimated size is notably smaller (<80% of planned), warns about potential early high-responder cohort bias.
9. Monte Carlo Validation
The calculator supports Tier 2 simulation validation through the Adaptive Core engine. Monte Carlo simulations independently verify the analytical results by:
- Generating interim data under the true parameters
- Computing the blinded variance (or pooled rate) estimate
- Recalculating the sample size with the same algorithm
- Generating remaining data to the adjusted target
- Performing the final z-test
- Repeating 1,000–100,000 times
Reported Metrics
Type I Error
Rejection rate under $H_0$ (true effect = 0). Should be close to the nominal one-sided $\alpha$ (within Monte Carlo error).
Empirical Power
Rejection rate under $H_1$. Should approximate the analytical conditional power.
Final N Distribution
Mean, median, Q25, Q75, min, and max of the final sample size across simulations.
Discordance Check
If simulated power deviates from analytical by >3%, a warning is raised.
Reproducibility: Every simulation run is seeded (via simulation_seed) and the seed is stored alongside results. Re-running with the same seed produces identical output.
10. API Reference
POST /api/v1/calculators/ssr-blinded
Computes blinded sample size re-estimation with optional Monte Carlo simulation validation.
Request Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| endpoint_type | string | "continuous" | "continuous", "binary", or "survival" |
| alpha | float | 0.025 | One-sided significance level (0, 1) |
| power | float | 0.90 | Target power (0.5, 1) |
| mean_difference | float | 0.3 | Treatment effect for continuous endpoints (>0) |
| initial_variance | float | 1.0 | Planned variance for continuous endpoints (>0) |
| control_rate | float? | null | Control arm event rate for binary endpoints (0, 1) |
| treatment_rate | float? | null | Treatment arm event rate (0, 1); must be > control_rate |
| interim_fraction | float | 0.5 | Information fraction at interim (0.1, 0.9) |
| n_max_factor | float | 2.0 | Maximum inflation factor [1.0, 5.0] |
| observed_variance | float? | null | Observed blinded variance (continuous, >0) |
| observed_pooled_rate | float? | null | Observed blinded pooled rate (binary, (0,1)) |
| hazard_ratio | float? | null | Assumed HR (survival, (0,2), ≠1) |
| median_control | float? | null | Median control survival in months (>0) |
| accrual_time | float? | null | Accrual period in months (>0) |
| follow_up_time | float? | null | Follow-up after accrual in months (≥0) |
| dropout_rate | float | 0.0 | Annual dropout rate [0, 1) |
| allocation_ratio | float | 1.0 | Randomization ratio treatment:control (>0, survival) |
| observed_event_rate | float? | null | Blinded pooled event rate at interim (survival, (0,1)) |
| simulate | bool | false | Enable Monte Carlo simulation tier |
| simulation_seed | int? | null | Seed for reproducibility; auto-generated if omitted |
| n_simulations | int | 10000 | Number of simulations [1000, 100000] |
Example Request
```json
{
  "endpoint_type": "continuous",
  "alpha": 0.025,
  "power": 0.90,
  "mean_difference": 0.3,
  "initial_variance": 1.0,
  "interim_fraction": 0.5,
  "n_max_factor": 2.0,
  "observed_variance": 1.4,
  "simulate": true,
  "n_simulations": 10000
}
```

Example Request (Survival Endpoint)
```json
{
  "endpoint_type": "survival",
  "alpha": 0.025,
  "power": 0.90,
  "hazard_ratio": 0.7,
  "median_control": 12,
  "accrual_time": 24,
  "follow_up_time": 12,
  "allocation_ratio": 1.0,
  "interim_fraction": 0.5,
  "n_max_factor": 2.0,
  "observed_event_rate": 0.55,
  "simulate": true,
  "n_simulations": 10000
}
```

Response Fields
| Field | Description |
|---|---|
| initial_n_per_arm | Sample size per arm before re-estimation |
| recalculated_n_per_arm | Sample size per arm after re-estimation |
| inflation_factor | Ratio of recalculated to initial total N |
| conditional_power | Conditional power at the adjusted sample size |
| n_capped | Whether the cap was binding |
| recalculation_scenarios | Sensitivity table with 5–6 scenarios |
| regulatory_notes | Context-specific regulatory guidance |
| events_required | Required events d from Schoenfeld formula (survival only) |
| event_probability | Planned weighted event probability (survival only) |
| observed_event_probability | Re-estimated event probability at interim (survival only) |
| initial_n_control | Initial control arm N (survival with allocation_ratio) |
| initial_n_treatment | Initial treatment arm N (survival with allocation_ratio) |
| recalculated_n_control | Recalculated control arm N (survival) |
| recalculated_n_treatment | Recalculated treatment arm N (survival) |
11. Technical References
- Kieser M, Friede T. Simple procedures for blinded sample size adjustment that do not affect the type I error rate. Statistics in Medicine. 2003;22(23):3571–3581.
- Friede T, Kieser M. Sample size recalculation for binary data in internal pilot study designs. Pharmaceutical Statistics. 2004;3(4):269–279.
- Zucker DM, Wittes JT. The bias and efficiency of blinded variance estimates in clinical trials. Statistics in Medicine. 2004;23(4):565–574.
- FDA. Adaptive Designs for Clinical Trials of Drugs and Biologics: Guidance for Industry. 2019. Section IV.B.2.
- EMA. Reflection Paper on Methodological Issues in Confirmatory Clinical Trials Planned with an Adaptive Design. CHMP/EWP/2459/02. 2007.
- Cui L, Hung HMJ, Wang SJ. Modification of sample size in group sequential clinical trials. Biometrics. 1999;55(3):853–857.
- Fleiss JL, Levin B, Paik MC. Statistical Methods for Rates and Proportions. 3rd ed. Wiley; 2003.
- Schoenfeld D. The asymptotic properties of nonparametric tests for comparing survival distributions. Biometrika. 1981;68(1):316–319.
- Gould AL. Interim analyses for monitoring clinical trials that do not materially affect the type I error rate. Statistics in Medicine. 1992;11(1):55–66.
- Friede T, et al. Blinded sample size re-estimation in event-driven clinical trials. Pharmaceutical Statistics. 2019;18(5):578–588.
Related Documentation
Unblinded SSR
When the treatment effect is uncertain, unblinded SSR uses the promising zone approach to adjust sample size based on observed efficacy.
Complete Guide to SSR
Practitioner guide with blinded vs. unblinded decision framework, worked examples, SAP language, and R code.
Group Sequential Design
Interim monitoring with early stopping rules. GSD and SSR are complementary tools for adaptive trial design.