Collaborative Causal Inference with Fair Incentives

Causal inference estimates the causal effect of treatment variables on some target population(s) and is widely adopted across various fields. In healthcare, hospitals perform causal inference to assess the efficacy of drugs. For example, physicians gather real-world data on patients’ recovery rates under various medications for the same disease, and use these data to find out which medication is more effective, according to how much each medication improves the recovery rate compared to a placebo. In agriculture, causal inference is similarly used to determine the growth achievable from applying a particular type of nutrient.

However, the collected data can be deficient and non-representative of the population of interest, degrading the accuracy of the estimated treatment effect. In real life, patients prefer to visit nearby hospitals, so each hospital ends up with few, geographically biased, and demographically biased records, giving rise to data deficiency and non-representativeness. If the hospitals perform causal inference individually, they are likely to obtain inaccurate treatment effect estimates and may fail to prescribe the most efficacious medications, which is undesirable!

To resolve these data issues, we can resort to collaboration! Collaborative causal inference aggregates the data shared by participating parties (e.g., companies, organizations, or individuals) to overcome the deficiency and non-representativeness of each party’s individual data. Consequently, the parties obtain more accurate and statistically significant treatment effect estimates. Such high-quality estimates for medical treatments help doctors improve their prescriptions, benefiting patients and society.

However, collaboration relies on the willingness of all parties to share their data, which is not always the case. In practice, parties are often self-interested and unwilling to share their valuable and proprietary source data because (1) collecting, processing, and storing the data is costly, and (2) the parties compete with one another for market share. Moreover, some parties may consider it unfair if others with less valuable data benefit from the collaboration as much as they do, causing a “free-rider” problem. If many parties perceive a lack of incentives and refuse to participate, the collaboration cannot be formed in the first place. This motivates the need to promote collaboration among self-interested parties with guaranteed:

  1. benefit: by joining the collaboration, the parties are guaranteed to do causal inference at least as well as before;
  2. fairness: the parties are rewarded fairly according to their contribution to the collaboration.

In this work [1], we present a game-theoretic reward scheme to incentivize the collaboration of multiple self-interested parties for causal inference by fairly rewarding them with more valuable treatment effect estimates. Our methodology consists of three steps:

  1. Data Valuation: How much is a dataset worth for causal inference?
  2. Reward Valuation: How much reward should we give to each party according to their contribution?
  3. Reward Realization: How to realize the reward in practice?


Data valuation: Data valuation is the key building block of the proposed collaborative causal inference scheme with fairness considerations. It is a quantitative measure of the value of a dataset for causal inference. A key challenge here is that causal inference estimates need to be both accurate and statistically significant to be useful in practice. Accuracy refers to the error between the estimated treatment effect and the ground truth, while statistical significance refers to how confident we are about the estimate; an accurate estimate with low statistical significance may only be accurate by chance and is thus unreliable. Both aspects need to be considered in the data valuation approach. We first use the treatment effect estimate obtained from the data aggregated over all collaborating parties as a surrogate for the “ground truth”. As this is the best estimate we can obtain in practice, this “ground truth” is also the most valuable estimate. Then, as each treatment effect estimate approximately follows a normal distribution by the Central Limit Theorem, we propose to value each dataset by the negative reverse Kullback-Leibler (KL) divergence between its resulting treatment effect estimate and the ground-truth surrogate. Note that since the KL divergence is asymmetric, its direction matters here. We choose the reverse KL instead of the forward KL because the forward KL overly punishes overconfidence (having a smaller variance) of the estimate, which is undesirable. Equipped with the data valuation function, we can value not only each party’s dataset but also a coalition of datasets from multiple parties. For example, the estimate obtained using the datasets of all parties has the maximum data value, as it coincides with the “ground truth” and hence has zero divergence.
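To make this concrete, here is a minimal sketch in Python (the function names and numbers are illustrative, and the exact valuation in the paper may differ). It assumes each coalition’s treatment effect estimate is summarized by a mean and variance, so both the estimate and the ground-truth surrogate are treated as univariate Gaussians, and it computes the negative reverse KL divergence, read here as KL(estimate || surrogate):

```python
import numpy as np

def kl_normal(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for univariate Gaussians."""
    return 0.5 * (np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def data_value(est_mu, est_var, surrogate_mu, surrogate_var):
    """Value of a (coalition's) dataset: negative reverse KL divergence between
    its treatment effect estimate and the ground-truth surrogate.
    Higher (closer to 0) means a more valuable dataset."""
    return -kl_normal(est_mu, est_var, surrogate_mu, surrogate_var)

# The grand coalition's estimate coincides with the surrogate, so its value is maximal (0).
print(data_value(0.8, 0.04, 0.8, 0.04))  # 0.0
# A biased, overconfident estimate from a single party is valued lower (more negative).
print(data_value(0.3, 0.01, 0.8, 0.04))  # about -3.44
```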

Reward Valuation: The reward value is a simple proxy for the actual reward received by the participating parties. It needs to satisfy certain desirable incentive criteria to encourage parties to join the collaboration. We highlight a non-exhaustive list of the incentive criteria here:

  1. Individual rationality: parties are guaranteed to perform causal inference at least as well as without collaboration; otherwise, they would not participate because they would receive worse estimates.
  2. Fairness: parties contributing more valuable datasets should receive more valuable rewards to avoid the free-rider problem.
  3. Efficiency: the reward values should be maximized such that at least one party is rewarded with an estimate of the best achievable quality.

We design the reward value based on the Shapley value, a concept from cooperative game theory for a fair allocation of rewards among the parties. It can be shown that our reward value satisfies the full list of incentive criteria under mild conditions. In this case, the parties are more motivated to join the collaboration because they are guaranteed to perform better and be rewarded fairly based on their contribution.
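As a sketch of this step, the snippet below computes standard Shapley values by averaging each party’s marginal contribution over all orderings of the parties. The coalition values are hypothetical (e.g., negative reverse KL values from the data valuation step), and the paper’s reward values are derived from the Shapley value but may be further transformed, so treat this purely as an illustration of the underlying allocation rule:

```python
from itertools import permutations
from math import factorial

def shapley_values(parties, coalition_value):
    """Exact Shapley values via marginal contributions averaged over all
    orderings of the parties (tractable only for a small number of parties)."""
    phi = {p: 0.0 for p in parties}
    for order in permutations(parties):
        coalition = frozenset()
        for p in order:
            phi[p] += coalition_value(coalition | {p}) - coalition_value(coalition)
            coalition = coalition | {p}
    return {p: total / factorial(len(parties)) for p, total in phi.items()}

# Hypothetical coalition values (e.g., from the data valuation step above).
v = {frozenset(): -5.0, frozenset({"A"}): -2.0,
     frozenset({"B"}): -3.0, frozenset({"A", "B"}): 0.0}
print(shapley_values(["A", "B"], lambda S: v[S]))
# {'A': 3.0, 'B': 2.0} -- party A contributes more and receives a larger reward value.
```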

Reward Realization: Finally, we need to realize the reward for each party, i.e., give each party a new treatment effect estimate and a corresponding confidence interval whose quality matches its fair reward value. There are a few additional considerations here:

  • Fidelity: the rewarded treatment effect estimate should not provide wrong information about the basic question of whether the treatment is effective. Regarding a non-effective treatment as effective is likely to have negative consequences (e.g., prescribing a non-effective medication).
  • Information Obscurity: the knowledge of the “ground-truth” treatment effect estimate should be obscured so that the parties cannot exploit the reward scheme to infer rewards more valuable than they are entitled to, which would defeat the purpose of our scheme.

To achieve these additional criteria, we propose a stochastic reward realization strategy with rejection sampling that perturbs the ground truth estimate according to the reward value. The random perturbation obscures the “ground truth”, and the rejection sampling prevents rewarding parties with estimates that lack fidelity.
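The sketch below shows one way such a stochastic realization with rejection sampling could look. It is an illustrative simplification under our own assumptions (the scaling rule, the parameter reward_fraction, and the function name are made up for this example, not the paper’s exact procedure): the smaller a party’s reward value, the more the surrogate mean is perturbed and its variance inflated, while samples that flip the sign of the estimated effect are rejected to preserve fidelity.

```python
import numpy as np

rng = np.random.default_rng(0)

def realize_reward(surrogate_mu, surrogate_var, reward_fraction, max_tries=1000):
    """Illustrative stochastic reward realization (not the paper's exact rule).
    reward_fraction in (0, 1]: 1 means full reward (the surrogate itself);
    smaller values yield a noisier, less certain rewarded estimate."""
    noise_var = surrogate_var * (1.0 / reward_fraction - 1.0)
    rewarded_var = surrogate_var / reward_fraction
    if noise_var == 0.0:
        return surrogate_mu, rewarded_var
    for _ in range(max_tries):
        mu = rng.normal(surrogate_mu, np.sqrt(noise_var))
        # Fidelity via rejection sampling: never flip the conclusion about
        # whether the treatment is effective (the sign of the effect).
        if np.sign(mu) == np.sign(surrogate_mu):
            return mu, rewarded_var
    return surrogate_mu, rewarded_var  # fallback if all samples were rejected

# A party with a small reward value receives a perturbed, less certain estimate.
print(realize_reward(surrogate_mu=0.8, surrogate_var=0.04, reward_fraction=0.4))
```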

We empirically simulate the collaborative causal inference scenario with the proposed framework on multiple parties. Define the group welfare as the sum of the reward values of all parties. We show that our framework with the fairness guarantee achieves a significant improvement in welfare over the non-collaborative case (all parties work individually), while incurring only a small loss in welfare compared to the collaborative case without fairness (all parties are rewarded equally with the most valuable estimate). Thus, there is a trade-off between fairness and total group welfare, because enforcing a fair, contribution-based reward scheme means giving out dissimilar treatment effect estimates to different parties.

One ethical concern is why we should withhold the causal knowledge of the treatment at all. Won’t it be more beneficial to society if everyone simply gets the same most valuable estimate? We actually think so too, provided that it is achievable. As argued previously, without the fairness incentive, the collaboration may not be formed in the first place, let alone produce the most valuable estimate. Our scheme does not aim to conceal discovered knowledge; rather, it offers an alternative path towards improving the group welfare by encouraging the establishment of collaborations that would not exist otherwise. The scheme is far from perfect and certainly requires continued refinement.

In summary, we propose a novel framework that incentivizes the collaboration of self-interested parties for causal inference by fairly rewarding them with more valuable treatment effect estimates. The framework consists of (1) a data valuation function for causal inference based on the negative reverse KL divergence to the ground-truth surrogate, (2) a Shapley-value-based reward scheme that satisfies desirable incentive criteria, and (3) a stochastic reward realization strategy based on rejection sampling.

Reference

[1] Rui Qiao, Xinyi Xu, and Bryan Kian Hsiang Low. Collaborative Causal Inference with Fair Incentives. In Proceedings of the 40th International Conference on Machine Learning (ICML), 2023.
