We introduce a framework for constructing confidence intervals for the performance of a system as a function of a parameter, decision variable, or system state, even when the system is not simulated at that particular parameter, decision variable, or state. The proposed methods leverage observations from other simulated model instances together with known functional properties of the performance measure being evaluated. The resulting intervals, termed plausible intervals, deliver a desired coverage probability uniformly over all model instances as the minimum sample size at the simulated model instances increases, and they attain the strongest form of consistency achievable from simulating only a finite number of model instances. We illustrate the versatility and effectiveness of plausible intervals through two numerical experiments.