Causing: CAUSal INterpretation using Graphs

License: MIT | Python 3.7

Causing is a multivariate graphical analysis tool that helps you interpret the causal effects of a given equation system. It produces a nicely colored graph from which you can immediately read off the causal effects between the variables.

Input: You simply put in a dataset and provide an equation system in the form of a Python function. The endogenous variables on the left-hand side are assumed to be caused by the variables on the right-hand side of the equation. Thus, you provide the causal structure in the form of a directed acyclic graph (DAG).

Output: As output, you get a colored graph of quantified effects acting between the model variables. You can immediately interpret mediation chains for every individual observation - even for highly complex nonlinear systems.

Further, the method enables model validation. The effects are estimated using a structural neural network, so you can check whether your assumed model fits the data. Testing each individual effect for significance guides you in modifying and further developing the model. The method can be applied to highly latent models with many of the modeled endogenous variables being unobserved.

Here is a table relating Causing to other approaches:

| Causing is | Causing is NOT |
| --- | --- |
| causal model given | causal search |
| DAG, directed acyclic graph | cyclic, undirected, or bidirected graph |
| latent variables | just observed / manifest variables |
| individual effects | just average effects |
| direct, total, and mediation effects | just total effects |
| linear algebra effect formulas | iterative do-calculus rules |
| local identification via ridge regression | check of global identification rules |
| one regression for all effects | individual counterfactual analysis |
| structural model | reduced model |
| small data | big data requirement |
| supervised learning | unsupervised learning |
| minimizing sum of squared errors | fitting covariance matrix |
| model estimation plus validation | just model estimation |
| graphical results | just numerical results |
| XAI, explainable AI | black box neural network |

The Causing approach is quite flexible. The most severe restriction certainly is that you need to specify the causal model / causal ordering. If you know the causal ordering but not the specific equations, you can let Causing estimate a linear relationship: just plug in sensible starting values, as in the sketch below.
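For example, if you only knew that Y1 depends on X1 and Y2 on X2 and Y1, a linear specification with rough coefficient guesses could serve as the hypothesized model; the guessed coefficients then act as starting values for estimation. This is a hypothetical sketch, not taken from the package:

from sympy import symbols

X1, X2, Y1, Y2 = symbols(["X1", "X2", "Y1", "Y2"])

def define_equations(X1, X2):
    # hypothetical linear specification: the guessed coefficients serve
    # as starting values; Causing then estimates the actual linear effects
    eq_Y1 = 1.0 * X1               # guess: Y1 responds one-to-one to X1
    eq_Y2 = 0.5 * X2 + 2.0 * Y1    # rough guesses for the Y2 equation
    return eq_Y1, eq_Y2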

Further, exogenous variables are assumed to be observed and deterministic. Endogenous variables instead may be manifest or latent, and they might have correlated error terms. Error terms are not modeled explicitly; they are automatically dealt with in the regression / backpropagation estimation.

Introduction Video

This 5 minute introductory video gives you a short overview and a real data example:

See Causing_Introduction_Video

Software

Causing is free software written in Python 3. It makes use of PyTorch for automatic computation of total derivatives and SymPy for partial algebraic derivatives. Graphs are generated using Graphviz, and PDF output is done by ReportLab.

See requirements.txt

Effects

Causing provides direct, total, and mediation effects. Using the given equation system, they are computed for each individual observation (individual effects) and at the median point of the data (average effects). Further, effects are estimated by fitting the model to the observed data (estimated effects). The respective effects are abbreviated as:

| Effects | Direct | Total | Mediation |
| --- | --- | --- | --- |
| Average effects | ADE | ATE | AME |
| Estimated effects | EDE | ETE | EME |
| Individual effects | IDE | ITE | IME |
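The difference between direct and total effects is that of a partial versus a total derivative. A minimal SymPy sketch, not package code, using the equations of the example further below:

from sympy import symbols, diff

X1, X2, Y1 = symbols("X1 X2 Y1")

eq_Y1 = X1                # Y1 = X1
eq_Y2 = X2 + 2 * Y1**2    # Y2 = X2 + 2 * Y1^2

# direct effect of Y1 on Y2: partial derivative, upstream variables held fixed
direct_Y1_on_Y2 = diff(eq_Y2, Y1)                  # 4*Y1

# total effect of X1 on Y2: substitute the upstream equation, then differentiate
total_X1_on_Y2 = diff(eq_Y2.subs(Y1, eq_Y1), X1)   # 4*X1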

Model Validation

To evaluate the estimation, t-values are reported. To evaluate whether an effect is needed at all, t-values with respect to zero are shown (estimated effect divided by its standard deviation). These t-values are expected to be significant, i.e. larger than two in absolute value. Insignificant effects could indicate possible model simplifications.

To evaluate the validity of the hypothesized model, t-values with respect to the hypothesized average model effects are used (estimated effect minus average effect, divided by its standard deviation). In this case, significant deviations could suggest a model refinement.

| t-values | Direct | Total | Mediation |
| --- | --- | --- | --- |
| t-values wrt. zero | ED0 | ET0 | EM0 |
| t-values wrt. model | ED1 | ET1 | EM1 |

Finally, for every equation we separately estimate a constant / bias term to quickly find possibly misspecified equations: a significant bias points to such an equation.
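In plain numbers, the two t-value types compare an estimated effect either to zero or to its hypothesized value. An illustrative sketch with made-up numbers:

import numpy as np

est_effect = np.array([1.1, 11.5])   # estimated direct effects (made up)
hyp_effect = np.array([1.0, 12.0])   # hypothesized average model effects
std_dev    = np.array([0.4, 2.0])    # estimated standard deviations

t_wrt_zero  = est_effect / std_dev                 # ED0-type: want |t| > 2
t_wrt_model = (est_effect - hyp_effect) / std_dev  # ED1-type: want |t| < 2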

Abstract

We propose simple linear algebra formulas for the causal analysis of equation systems. The effect of one variable on another is the total derivative. We extend them to endogenous system variables. These total effects are identical to the effects used in graph theory and its do-calculus. Further, we define mediation effects, decomposing the total effect of one variable on a final variable of interest over all its directly caused variables. This allows for an easy but in-depth causal and mediation analysis.

To estimate the given theoretical model we define a structural neural network (SNN). The network's nodes are represented by the model variables and its edge weights are given by the direct effects. Identification could be given by zero restrictions on direct effects implied by the equation model provided. Otherwise, identification is automatically achieved via ridge regression / weight decay. We choose the regularization parameter minimizing out-of-sample sum of squared errors subject to at least yielding a well conditioned positive-definite Hessian, being evaluated at the estimated direct effects.

Unlike classical deep neural networks, we follow a sparse 'small data' approach. Estimation of the structural direct effects is done using PyTorch and automatic differentiation, tailor-made for fast backpropagation. We make use of our closed-form effect formulas in order to compute mediation effects. The gradient and Hessian are also given in analytic form.

Keywords: total derivative, graphical effect, graph theory, do-Calculus, structural neural network, linear Simultaneous Equations Model (SEM), Structural Causal Model (SCM), insurance rating
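To make the 'linear algebra effect formulas' concrete: by the implicit function theorem, with D_yy the matrix of direct effects of endogenous on endogenous variables and D_yx that of exogenous on endogenous variables, the total effects of x on y are (I - D_yy)^(-1) D_yx. A numpy sketch, evaluated at Y1 = 3 for the example given further below (the matrix notation here is ours, not necessarily the paper's):

import numpy as np

# variable order: yvars = [Y1, Y2, Y3], xvars = [X1, X2]
D_yy = np.array([[ 0, 0, 0],     # Y1 <- no endogenous variable
                 [12, 0, 0],     # Y2 <- Y1 (direct effect 4*Y1 at Y1 = 3)
                 [ 1, 1, 0]])    # Y3 <- Y1 and Y2
D_yx = np.array([[1, 0],         # Y1 <- X1
                 [0, 1],         # Y2 <- X2
                 [0, 0]])        # Y3 has no exogenous parent

# total effects of the exogenous on the endogenous variables
T_yx = np.linalg.solve(np.eye(3) - D_yy, D_yx)   # (I - D_yy)^{-1} D_yx
print(T_yx[2])                                   # [13.  1.]: total effects of X1, X2 on Y3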

Citation

The Causing approach and its formulas together with an application are given in:

Bartel, Holger (2020), "Causal Analysis - With an Application to Insurance Ratings" DOI: 10.13140/RG.2.2.31524.83848 https://www.researchgate.net/publication/339091133

Note that in this paper the mediation effects on the final variable of interest are called final effects.

Example

Assume a model defined by the equation system:

Y1 = X1

Y2 = X2 + 2 * Y1^2

Y3 = Y1 + Y2.
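For orientation, the average effects can be worked out by hand. At the point X1 = 3, X2 = 2 (taking the stated means as stand-ins for the medians), the system yields Y1 = 3, Y2 = 2 + 2 * 3^2 = 20, and Y3 = 23. The direct effect of Y1 on Y2 at this point is 4 * Y1 = 12, so the total effect of X1 on Y3 accumulates along the paths X1 -> Y1 -> Y3 and X1 -> Y1 -> Y2 -> Y3 to 1 * 1 + 1 * 12 * 1 = 13, while the total effect of X2 on Y3 is 1.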

This gives the following graphs. Some notes are in order to understand them:

  • The data used consist of 200 observations. They are available for the exogenous variables X1 and X2 with mean(X1) = 3 and mean(X2) = 2. Variables Y1 and Y2 are assumed to be latent / unobserved. Y3 is assumed to be manifest / observed; therefore 200 observations are available for Y3.

  • Average effects are based on the hypothesized model. The median values of all exogenous data are put into the given model function, yielding the corresponding endogenous values. The effects are computed at this point.

  • Individual effects are also based on the hypothesized model. For each individual, however, its own exogenous data is put into the given model function to yield the corresponding endogenous values. The effects are computed at this individual point.

  • Estimated effects are based on the hypothesized model: the zero restrictions (effects being always exactly zero by model construction) are carried over, and the average hypothesized effects are used as starting values. However, the effects are estimated by fitting a linearized approximate model using a structural neural network. Effects are fitted by minimizing the squared errors of the observed endogenous variables. This corresponds to a nonlinear structural regression of Y3 on X1 and X2 using all 200 observations (see the sketch after this list).

  • Mediation effects are shown, as an example, for the final variable of interest, here assumed to be Y3. In the mediation graph each node depicts the total effect of that variable on Y3. This effect is partitioned over all outgoing edges, which represent the mediation effects and thus enable path interpretation. Note, however, that incoming edges do not sum up to the node value.

  • Individual effects are shown, as an example, for individual no. 1 out of the 200 observations. To ease interpretation, each individual effect is multiplied by the absolute difference of its causing variable from the median of all observations. Further, nodes and edges are colored, showing the positive (green) and negative (red) effects these deviations have on the final variable Y3.
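The following PyTorch sketch illustrates this fitting step under assumptions of ours: a linearized model, masks encoding the zero restrictions, plain weight decay as the ridge penalty, and synthetic data standing in for the real xdat and ymdat. The package's actual SNN, loss, and regularization search are more elaborate:

import torch

torch.manual_seed(0)

# synthetic data from the example system; only Y3 is observed
xdat = torch.randn(2, 200) + torch.tensor([[3.0], [2.0]])
ymdat = xdat[0] + (xdat[1] + 2 * xdat[0] ** 2)       # Y3 = Y1 + Y2

# zero restrictions from the DAG: a weight may only be nonzero where mask == 1
mask_yy = torch.tensor([[0., 0., 0.],
                        [1., 0., 0.],                # Y2 <- Y1
                        [1., 1., 0.]])               # Y3 <- Y1, Y2
mask_yx = torch.tensor([[1., 0.], [0., 1.], [0., 0.]])

D_yy = torch.zeros(3, 3, requires_grad=True)         # direct effects = edge weights
D_yx = torch.zeros(3, 2, requires_grad=True)

# ridge regression via weight decay; the alpha value is illustrative
opt = torch.optim.Adam([D_yy, D_yx], lr=0.01, weight_decay=0.0015)

for _ in range(2000):
    opt.zero_grad()
    # linearized model: solve y = D_yy y + D_yx x for the endogenous variables
    y = torch.linalg.solve(torch.eye(3) - D_yy * mask_yy, (D_yx * mask_yx) @ xdat)
    loss = ((y[2] - ymdat) ** 2).sum()               # fit the observed Y3 only
    loss.backward()                                   # automatic differentiation
    opt.step()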

| Effects | Direct | Total | Mediation for Y3 |
| --- | --- | --- | --- |
| Average effects | Average Direct Effects (ADE) | Average Total Effects (ATE) | Average Mediation Effects (AME) |
| Estimated effects | Estimated Direct Effects (EDE) | Estimated Total Effects (ETE) | Estimated Mediation Effects (EME) |
| Individual effects for individual no. 1 | Individual Direct Effects (IDE) | Individual Total Effects (ITE) | Individual Mediation Effects (IME) |

As you can see in the bottom right graph for the individual mediation effects (IME), there is one green path starting at X1, passing through Y1 and Y2, and finally ending in Y3. This means that X1 is the main cause for Y3 taking on a value above average, its effect being +37.44. However, this positive effect is slightly reduced by X2. In total, accounting for all exogenous and endogenous effects, Y3 is +29.34 above average. You can understand at one glance why Y3 is above average for individual no. 1.

The t-values corresponding to the estimated effects are also given as graphs. To assess model validity using the t-value graphs, note the following:

  • Estimated standard errors for the effects are derived from the Hessian. Tests and t-values are asymptotically correct, but in small samples they suffer from the effects being biased in the case of regularization.

  • In this example regularization is required. The minimal regularization parameter is 0.000950 to obtain a well-posed optimization problem with a positive-definite Hessian. The optimal regularization parameter minimizing out-of-sample squared errors is 0.001545.

  • The t-values with respect to zero should be larger than two in absolute value, indicating that the specified model structure indeed yields significant effects.

  • The t-values with respect to the hypothesized model effects should be smaller than two in absolute value, indicating that there is no severe deviation between model and data.

  • For the mediation t-value graphs EM0 and EM1 the outgoing edges do not sum up to their node value. In the EM0 graph all outgoing edges are even identical to their node, because effects and standard deviations are partitioned in the same way over the outgoing edges, thus cancelling out in the t-values. However, this is not true for the EM1 graph, since different partitioning schemes are used for the estimated and the subtracted hypothesized model effects.

| t-values | Direct | Total | Mediation for Y3 |
| --- | --- | --- | --- |
| t-values wrt. zero | Estimated Direct Effects (ED0) | Estimated Total Effects (ET0) | Estimated Mediation Effects (EM0) |
| t-values wrt. model | Estimated Direct Effects (ED1) | Estimated Total Effects (ET1) | Estimated Mediation Effects (EM1) |

The t-values with respect to zero show that only some of the estimated effects are significant. This could be due to the small sample size. In this example we estimate five direct effects from 200 observations, with the only observable endogenous variable being Y3.

None of the t-values with respect to the hypothesized model values is significant. This means that the specified model fits the observed data well.

Biases are estimated for each endogenous variable. Estimation is done at the point of average effects implied by the specified model. That is, possible model misspecifications are captured by a single bias, one at a time. Biases therefore are just one simple way to detect wrong modeling assumptions.

| Variable | Bias value | Bias t-value |
| --- | --- | --- |
| Y1 | 0.00 | 0.64 |
| Y2 | 0.06 | 0.55 |
| Y3 | 0.06 | 0.55 |

In our example none of the biases is significant, further supporting correctness of model specification.

A Real World Example

To dig a bit deeper, here is a real-world example from the social sciences: we analyze how the wage earned by young American workers is determined by their educational attainment, family characteristics, and test scores.

See education.md

Start your own Model

When starting causing.py after cloning / downloading the Causing repository, you will find the example results described above in the output folder. They are given as PDF files, and the single graphs are also provided as PNG files for further use.

At the bottom of causing.py the example is called via model_dat = models.example(). To start your own model, create a function, e.g. mymodel, in the module models and generate the corresponding model data via model_dat = models.mymodel(). Then start causing.py.

You have to provide the following information, as done in the example code below:

  • Define all your model variables as SymPy symbols.
  • In define_equations, define a Python SymPy function containing the model equations and returning them in topological order, that is, in order of computation. Note that in SymPy some operators are special, e.g. Max() instead of max() (see the snippet after this list).
  • In model_dat, the dictionary to be returned, further specify
    • xvars: exogenous variables corresponding to data xdat
    • yvars: endogenous variables in topological order
    • ymvars: set of manifest / observed endogenous variables corresponding to data ymdat
    • final_var: the final variable of interest used for mediation effects
    • show_nr_indiv: to show individual effects only for the first individuals, set this to a value smaller than the sample size; this saves computation time
    • dir_path: directory path where the output is written to
  • Load your data xdat and ymdat.
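As noted in the list above, SymPy replaces some Python builtins. A quick hypothetical illustration:

from sympy import symbols, Max

X1 = symbols("X1")
eq_Y1 = Max(0, X1 - 2)   # use SymPy's Max(), not Python's built-in max()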

In the example case the Python SymPy function looks like this:

from sympy import symbols

def example():
    """model example"""

    X1, X2, Y1, Y2, Y3 = symbols(["X1", "X2", "Y1", "Y2", "Y3"])

    def define_equations(X1, X2):
        # equations in topological order; the endogenous symbols Y1, Y2
        # from the enclosing scope may appear on later right-hand sides
        eq_Y1 = X1
        eq_Y2 = X2 + 2 * Y1**2
        eq_Y3 = Y1 + Y2

        return eq_Y1, eq_Y2, eq_Y3

    model_dat = {
        "define_equations": define_equations,   # equations in topological order
        "xvars": [X1, X2],                      # exogenous variables corresponding to data
        "yvars": [Y1, Y2, Y3],                  # endogenous variables in topological order
        "ymvars": [Y3],                         # manifest endogenous variables
        "final_var": Y3,                        # final variable of interest, for mediation analysis
        "show_nr_indiv": 3,                     # show first individual effects
        "estimate_bias": True,                  # estimate equation biases, for model validation
        "alpha": None,                          # regularization parameter, is estimated if None
        "dir_path": "output/",                  # output directory path
        }

    # load data
    from numpy import loadtxt
    xdat = loadtxt("data/xdat.csv", delimiter=",").reshape(len(model_dat["xvars"]), -1)
    ymdat = loadtxt("data/ymdat.csv", delimiter=",").reshape(len(model_dat["ymvars"]), -1)

    model_dat["xdat"] = xdat                    # exogenous data
    model_dat["ymdat"] = ymdat                  # manifest endogenous data

    return model_dat

Start causing.py and view the generated graphs in the output folder. The file Causing_Average_and_Estimated_Effects.pdf contains the average effects (ADE, ATE, AME) based on the median xdat observation as well as the estimated effects (EDE, ETE, EME) using the observed endogenous data ymdat.

Causing_tvalues_and_Biases.pdf contains the t-value graphs with respect to zero (ED0, ET0, EM0) and the t-value graphs with respect to the hypothesized model (ED1, ET1, EM1). Also included are the estimated biases and their t-values for further model validation.

The enumerated files Causing_Individual_Effects*.png show the individual effects (IDE, ITE, IME) for the respective individual. In addition to the Individual Mediation Effects (IME) a table is given, listing the data of the IME nodes in decreasing order. It helps to identify the variables having the most positive and negative effects on the final variable for that individual.

Award

RealRate's AI software Causing is a winner of the PyTorch Summer Hackathon 2020.

October 2020: We are very happy to announce that the RealRate AI software was named a winner of the PyTorch Summer Hackathon 2020 in the Responsible AI category. This is quite an honor, given that more than 2,500 teams submitted their projects.

devpost.com/software/realrate-explainable-ai-for-company-ratings.

Causing means CAUSal INterpretation using Graphs. Causing is a tool for Explainable AI (XAI). We explain causality and ensure fair treatment.

The software is developed by RealRate, an AI rating agency aiming to reinvent the ratings market through AI and interpretability while avoiding any conflict of interest. See www.realrate.de.

License

Causing is available under MIT license. See LICENSE.

Consulting

If you need help with your project, please contact me. I could perform the data analytics or adapt the software to your special needs.

Dr. Holger Bartel
RealRate GmbH
Cecilienstr. 14, D-12307 Berlin
holger.bartel@realrate.de
Phone: +49 160 957 90
www.realrate.de
