Drawing causal conclusions from observed statistical dependencies is a fundamental problem. Conditional independence-based causal discovery methods (e.g., PC or FCI) cannot be applied when no conditional independences are observed. Alternative methods rely on a different set of assumptions, namely restrictions on the function class, as in additive noise models. In this talk, I will present two causal inference methods that employ different kinds of assumptions than the above.
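As a toy illustration of the function-class idea (not part of the talk's own methods): in an additive noise model Y = f(X) + N with N independent of X, regressing in the causal direction leaves residuals independent of the input, while the backward direction typically does not. The helper names (`hsic`, `anm_score`) and the data-generating model below are made up for this sketch.

```python
# Sketch of additive-noise-model direction detection, assuming a
# 1-D cause-effect pair and polynomial regression as the fit.
import numpy as np

def rbf_gram(v, sigma):
    """Gaussian kernel Gram matrix of a 1-D sample."""
    d2 = (v[:, None] - v[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(a, b):
    """Biased HSIC dependence estimate, median-heuristic bandwidths."""
    n = len(a)
    def bw(v):
        d = np.abs(v[:, None] - v[None, :])
        m = np.median(d[d > 0])
        return m if m > 0 else 1.0
    K, L = rbf_gram(a, bw(a)), rbf_gram(b, bw(b))
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / n ** 2

def anm_score(cause, effect, deg=3):
    """Dependence between regression input and residuals;
    a small score indicates a plausible causal direction."""
    coef = np.polyfit(cause, effect, deg)
    resid = effect - np.polyval(coef, cause)
    return hsic(cause, resid)

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 300)
y = x ** 3 + x + 0.4 * rng.uniform(-1, 1, 300)  # ground truth: X -> Y

forward, backward = anm_score(x, y), anm_score(y, x)
# The causal direction leaves residuals (nearly) independent of the
# input, so the forward score should be the smaller of the two.
print(forward < backward)
```

Note the non-Gaussian noise: for linear Gaussian models the two directions are indistinguishable, which is exactly why the function-class (or noise) restriction is needed.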

The first is a method to infer the existence of, and to identify, a confounder with finitely many values behind a set of observed dependent variables. It is based on a kernel method for identifying finite mixtures of nonparametric product distributions.

In the second part, I will focus on the problem of causal inference in the two-variable case (assuming no confounders). A known assumption is used, namely that if X causes Y, then P(X) contains no information about P(Y|X); in contrast, P(Y) may contain information about P(X|Y). This asymmetry has implications for common machine learning tasks such as semi-supervised and unsupervised learning, which the proposed causal inference method exploits.
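The asymmetry can be seen in a small discrete example (the distributions below are made up for this demo, not taken from the talk): under X → Y, shifting the input distribution P(X) leaves the mechanism P(Y|X) untouched, whereas the anticausal conditional P(X|Y) changes with it.

```python
# Toy demonstration of the causal/anticausal asymmetry for binary X, Y.
import numpy as np

p_y_given_x = np.array([[0.9, 0.1],    # mechanism P(Y|X): rows = X values,
                        [0.2, 0.8]])   # columns = Y values (hypothetical)

def backward(p_x):
    """P(X|Y) induced by the joint P(X) * P(Y|X), via Bayes' rule."""
    joint = p_x[:, None] * p_y_given_x           # joint[x, y]
    return joint / joint.sum(axis=0, keepdims=True)

p_x_train = np.array([0.5, 0.5])
p_x_test = np.array([0.9, 0.1])                  # shifted input distribution

# P(Y|X) is the same in both regimes by construction; P(X|Y) is not:
print(np.allclose(backward(p_x_train), backward(p_x_test)))  # False
```

This is the intuition behind the semi-supervised learning connection: extra unlabeled inputs carry information about the conditional of interest only in the anticausal direction, where the input distribution and that conditional are linked.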