Population adjustment methods such as matching-adjusted indirect comparison (MAIC) are increasingly used to compare marginal treatment effects when there are cross-trial differences in effect modifiers and limited patient-level data. MAIC is based on propensity score weighting, which is sensitive to poor covariate overlap because of its inability to extrapolate. Current regression adjustment methods can extrapolate beyond the observed covariate space but target conditional treatment effects, which is problematic when the measure of effect is non-collapsible. To overcome these limitations, we develop a novel method based on multiple imputation called predictive-adjusted indirect comparison (PAIC). The novelty of PAIC is that it is a regression adjustment method that targets marginal treatment effects. It proceeds by splitting the adjustment into two separate stages: the generation of synthetic datasets and their analysis. We compare two versions of PAIC to MAIC in a comprehensive simulation study of 162 scenarios. This simulation study is based on binary outcomes and binary covariates and uses the log-odds ratio as the measure of effect. The simulation scenarios vary the trial sample size, prognostic variable effects, interaction effects, covariate correlations and covariate overlap. Generally, both PAIC and MAIC yield unbiased treatment effect estimates and coverage rates close to the nominal value. In the simulations, PAIC provides more precise and more accurate estimates than MAIC, particularly when covariate overlap is poor. MAIC and PAIC use different adjustment mechanisms, and considering their results jointly may help evaluate the robustness of analyses.
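To make the propensity-score-weighting component of MAIC concrete, the sketch below illustrates the standard method-of-moments weighting step: individual patient data (IPD) covariates are centred at the comparator trial's published aggregate means, and weights of the form w_i = exp(x_i^T a) are found by minimising a convex objective whose gradient is exactly the moment-balancing condition. The data here are simulated placeholders, not from the paper's simulation study, and the variable names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical IPD from the index trial: two binary covariates
X = rng.binomial(1, [0.3, 0.6], size=(500, 2)).astype(float)

# Published aggregate covariate means from the comparator trial
target_means = np.array([0.5, 0.4])

# Centre the IPD covariates at the comparator-trial means
Xc = X - target_means

# MAIC weights are w_i = exp(x_i^T a); the method-of-moments 'a'
# minimises Q(a) = sum_i exp((x_i - xbar_agg)^T a), which is convex,
# and its gradient being zero enforces exact mean balance.
def q(a):
    return np.sum(np.exp(Xc @ a))

res = minimize(q, np.zeros(2), method="BFGS")
weights = np.exp(Xc @ res.x)

# After weighting, the IPD covariate means match the comparator trial
print(np.average(X, axis=0, weights=weights))
```

When covariate overlap is poor, a few observations receive extreme weights, reducing the effective sample size; this is the sensitivity to poor overlap that the abstract attributes to weighting-based adjustment and that motivates an extrapolation-capable alternative such as PAIC.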