
### About

Infotheory is a software package, written in C++ and usable from Python, for performing information-theoretic analysis on multivariate data. It implements traditional measures as well as more recent ones arising from multivariate extensions to information theory, specifically:

- Entropy [1]
- Mutual Information [2]
- Partial Information Decomposition [3]
  - Unique Information
  - Redundant Information
  - Synergistic Information
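
For two source variables $X_1, X_2$ and a target $Y$, these PID terms partition the total mutual information (following Williams & Beer [3]; the notation here is illustrative):

```
I(Y; X_1, X_2) = R(Y; X_1, X_2) + U(Y; X_1) + U(Y; X_2) + S(Y; X_1, X_2)
```

where $R$ is the redundant, $U$ the unique, and $S$ the synergistic information.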

The main highlights of this package include:

- written in C++ for efficiency
- ease of use via Python bindings and compatibility with numpy
- an API that lets you add the data once and then quickly perform various analyses across different sub-spaces of the dataset
- use of sparse data structures that work well with high-dimensional data
- user-controllable estimation of data distribution using averaged shifted histograms [4]
- flexibility in specifying binning, allowing proper estimation of information measures between continuous and discrete variables
- support for PI decomposition over 3 variables (two sources and one target) and 4 variables (three sources and one target)
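
To give a rough sense of the averaged shifted histogram idea [4], here is a minimal 1-D sketch in NumPy. The helper `ash_1d` is hypothetical and is not the package's implementation (which is multivariate and sparse); it only illustrates averaging histograms over shifted bin grids.

```python
import numpy as np

def ash_1d(data, lo, hi, nbins, nreps):
    """Simplified 1-D averaged shifted histogram: average (2*nreps + 1)
    histograms whose bin grids are shifted by fractions of a bin width."""
    width = (hi - lo) / nbins
    shifts = np.arange(-nreps, nreps + 1) * width / (2 * nreps + 1)
    hists = []
    for s in shifts:
        edges = lo + s + width * np.arange(nbins + 1)
        counts, _ = np.histogram(data, bins=edges)
        hists.append(counts / counts.sum())
    return np.mean(hists, axis=0)

rng = np.random.default_rng(0)
samples = rng.normal(0.5, 0.1, 1000)   # samples well inside [0, 1]
p = ash_1d(samples, 0.0, 1.0, 10, 3)   # nreps=3, as one might pass to the package
```

Averaging over shifted grids smooths out the arbitrary placement of bin edges, which is why a user-controllable $nreps$ is exposed by the package.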

The package is available open-source on GitHub. While the C++ headers should work on any platform, in its current release the Python package has been tested on Linux and macOS.

#### Citation

If you use this package, please cite the preprint available at: https://arxiv.org/abs/1907.02339

*Candadai, M., & Izquierdo, E. J. (2019). infotheory: A C++/Python package for multivariate information theoretic analysis. arXiv preprint arXiv:1907.02339.*

```
@article{candadai2019infotheory,
  title={infotheory: A C++/Python package for multivariate information theoretic analysis},
  author={Candadai, Madhavun and Izquierdo, Eduardo J},
  journal={arXiv preprint arXiv:1907.02339},
  year={2019}
}
```

### Installation

**Python**

From your terminal:

`pip install --upgrade infotheory`

On macOS, you might have to set two environment variables before installing:

`export CXXFLAGS="-mmacosx-version-min=10.9"`

`export LDFLAGS="-mmacosx-version-min=10.9"`

**C++**

Download Infotools.h and VectorMatrix.h, place them in your source directory, and include Infotools in your source like any other header:

`#include "Infotools.h"`

Since Infotools accepts TVector arrays as arguments, you will also need to include VectorMatrix.h, which contains the TVector class; data is added and measures are invoked using TVector objects.

`#include "VectorMatrix.h"`

### Usage

Examples and benchmarks are available here.

Infotheory is designed to be easy and intuitive to use. The steps involved in using the package, in both C++ and Python, are as follows:

**1. Create the object**

Two arguments are required to create an object and set up the analyses: $dims$, the total dimensionality of all variables combined, and $nreps$, the number of shifted binnings to average across. $nreps=0$ performs no shifted binning and is ideal for discrete-valued data; any positive value of $nreps$ produces that many shifts on each side to average over. With these values, an object can be created in Python as follows:

```
import infotheory
dims = 3
nreps = 0
it = infotheory.InfoTools(dims, nreps)
```

**2. Specify how the data should be binned**

Since the package uses a sparse representation, bin boundaries must be specified before adding the data. This can be done manually, or the package can create equal-width bins from data ranges provided by the user.

a. To specify equal width bins, the following arguments are required: $nbins$, a list specifying the number of bins for data along each dimension; $maxs$, a list specifying the maximum value of the data along each dimension; and finally $mins$, a list specifying the minimum value of the data along each dimension

```
nbins = [2, 3, 3] # for each dim
mins = [0, 0, 0] # for each dim
maxs = [1, 1, 1] # for each dim
it.set_equal_interval_binning(nbins, mins, maxs)
```

b. Alternatively, one can explicitly specify the bin boundaries by providing the left boundary of each bin along each dimension of the data.

```
boundaries = [[0,0.5], [0, 0.5, 1], [0, 0.5, 1]]
it.set_bin_boundaries(boundaries)
```
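
To picture what these left boundaries mean, the bin index of a value along each dimension is simply the last left edge that does not exceed it. The NumPy snippet below is only an illustration of this convention, not the package's internals:

```python
import numpy as np

# Left bin edges per dimension, as passed to set_bin_boundaries above
boundaries = [[0, 0.5], [0, 0.5, 1], [0, 0.5, 1]]
point = [0.7, 0.3, 0.9]

# Along each dimension, the bin index is the last left edge <= the value
bin_ids = [int(np.searchsorted(np.asarray(b), v, side="right")) - 1
           for b, v in zip(boundaries, point)]
# e.g. 0.7 falls in the bin whose left edge is 0.5 along the first dimension
```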

**3. Adding data**

Data can be added either one point at a time or all at once; each data point is a vector of length $dims$, i.e. a single concatenated vector of all random variables. When adding multiple data points at once, use a list of lists (or a 2D array).

```
import numpy as np

for _ in range(1000):  # adding 1000 random points
    it.add_data_point(np.random.rand(dims))

# Alternatively, add all points at once:
# it.add_data(np.random.rand(1000, dims))
```

**4. Invoking information-theoretic tools**

Once all data has been added, the different information-theoretic measures available in the package can be invoked. Since each data point was added as a single concatenated vector, the dimensions belonging to each variable must be identified when invoking a measure. This is done by passing a list of length $dims$ whose entries are IDs matching dimensions to variables. For instance, with $dims=3$, if the first two dimensions represent the first variable and the third represents the second variable, this list is $var\_ids=[0,0,1]$. This also allows sub-spaces of the data to be analyzed, by using $-1$ to ignore dimensions in a particular analysis. For the same $dims$ setup mentioned here,

`mi = it.mutual_info([0, 0, 1])`

measures the mutual information between a 2D random variable occupying the first two dimensions of the dataset and a 1D random variable occupying the third. With the same dataset, one can then invoke

`mi = it.mutual_info([0, -1, 1])`

to measure the mutual information between just the first dimension of the first random variable and the second variable. As a rule, dimensions marked $-1$ are ignored. For entropy, only dimensions marked $0$ are considered; for PID, the list requires IDs $0$, $1$ and $2$ to identify the three variables, where $0$ always denotes the target variable about which information is estimated.
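
The role of $var\_ids$ can be pictured as grouping columns of the dataset by variable ID and dropping columns marked $-1$. The helper below is a hypothetical illustration of this indexing convention, not the package's internals:

```python
import numpy as np

def split_by_var_ids(data, var_ids):
    """Group columns of `data` by variable ID; columns marked -1 are ignored."""
    ids = np.asarray(var_ids)
    return {int(v): data[:, ids == v] for v in np.unique(ids) if v != -1}

X = np.arange(12).reshape(4, 3)            # 4 data points, dims = 3
groups = split_by_var_ids(X, [0, 0, 1])    # var 0 is 2D, var 1 is 1D
sub = split_by_var_ids(X, [0, -1, 1])      # middle dimension ignored
```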

This final step can be repeated for any number of analyses, and the whole process can be started over for a new dataset. Sample code in Python for estimating mutual information between two 2D random variables is shown below. See this page for a detailed account of the different measures available, with sample code for each.


A demo in C++ for the same program is available here.

### Contact

Having trouble with Infotheory? Want to contribute? Contact Madhavun at madvncv [at] gmail.com

### License

#### MIT License

A short and simple permissive license with conditions only requiring preservation of copyright and license notices. Licensed works, modifications, and larger works may be distributed under different terms and without source code.

##### Permissions

- Commercial use
- Modification
- Distribution
- Private use

##### Limitations

- Liability
- Warranty

##### Conditions

- License and copyright notice


### Acknowledgement

This work was supported in part by NSF grant No. IIS-1524647. M.C. was funded by an assistantship from the Program in Cognitive Science, Indiana University, Bloomington. The authors would like to thank Randall Beer for VectorMatrix, the C++ vector libraries used in this package.

### References

1. Shannon entropy. http://www.scholarpedia.org/article/Entropy#Shannon_entropy
2. Mutual information. http://www.scholarpedia.org/article/Mutual_information
3. Williams, P. L., & Beer, R. D. (2010). Nonnegative decomposition of multivariate information. arXiv preprint arXiv:1004.2515.
4. Scott, D. W. (1985). Averaged shifted histograms: effective nonparametric density estimators in several dimensions. The Annals of Statistics, 1024-1040.
5. Timme, N., Alford, W., Flecker, B., & Beggs, J. M. (2014). Synergy, redundancy, and multivariate information measures: an experimentalist's perspective. Journal of Computational Neuroscience, 36(2), 119-140.