Metadata-Version: 2.3
Name: opt_einsum
Version: 3.4.0
Summary: Path optimization of einsum functions.
Author-email: Daniel Smith <dgasmith@icloud.com>
License-Expression: MIT
License-File: LICENSE
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.8
Description-Content-Type: text/markdown

# Optimized Einsum

[Tests](https://github.com/dgasmith/opt_einsum/actions/workflows/Tests.yml)
[Codecov](https://codecov.io/gh/dgasmith/opt_einsum)
[conda-forge](https://anaconda.org/conda-forge/opt_einsum)
[PyPI](https://pypi.org/project/opt-einsum/#description)
[PyPI Downloads](https://pypistats.org/packages/opt-einsum)
[Documentation](https://dgasmith.github.io/opt_einsum/)
[DOI](https://doi.org/10.21105/joss.00753)

## Optimized Einsum: A tensor contraction order optimizer

Optimized einsum can significantly reduce the overall execution time of einsum-like expressions (e.g.,
[`np.einsum`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html),
[`dask.array.einsum`](https://docs.dask.org/en/latest/array-api.html#dask.array.einsum),
[`pytorch.einsum`](https://pytorch.org/docs/stable/torch.html#torch.einsum), and
[`tensorflow.einsum`](https://www.tensorflow.org/api_docs/python/tf/einsum))
by optimizing the expression's contraction order and dispatching many
operations to canonical BLAS, cuBLAS, or other specialized routines.

Optimized einsum is agnostic to the backend and can handle NumPy, Dask, PyTorch,
TensorFlow, CuPy, Sparse, Theano, JAX, and Autograd arrays, as well as potentially
any library which conforms to a standard API. See the
[**documentation**](https://dgasmith.github.io/opt_einsum/) for more
information.

## Example usage

The [`opt_einsum.contract`](https://dgasmith.github.io/opt_einsum/api_reference#opt_einsumcontract)
function can often act as a drop-in replacement for `einsum`
functions without further changes to the code while providing superior performance.
Here, a tensor contraction is performed with and without optimization:

```python
import numpy as np
from opt_einsum import contract

N = 10
C = np.random.rand(N, N)
I = np.random.rand(N, N, N, N)

%timeit np.einsum('pi,qj,ijkl,rk,sl->pqrs', C, C, I, C, C)
1 loops, best of 3: 934 ms per loop

%timeit contract('pi,qj,ijkl,rk,sl->pqrs', C, C, I, C, C)
1000 loops, best of 3: 324 us per loop
```

In this particular example, we see a ~3000x performance improvement, which is
not uncommon when compared against unoptimized contractions. See the [backend
examples](https://dgasmith.github.io/opt_einsum/getting_started/backends)
for more information on using other backends.
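
As a brief sketch of backend dispatch (assuming PyTorch is installed; the operands here are illustrative), the same `contract` call accepts `torch` tensors directly, with the backend inferred from the array type:

```python
# A minimal sketch of backend dispatch, assuming PyTorch is installed.
import torch

from opt_einsum import contract

# Illustrative operands; any supported array type works the same way.
x = torch.rand(8, 8)
y = torch.rand(8, 8, 8)

# The backend is inferred from the array types, so this call mirrors
# the NumPy example above but runs through torch operations.
result = contract('ij,jkl->ikl', x, y)
print(result.shape)  # torch.Size([8, 8, 8])
```

With native tensors like these, the contraction runs entirely through the backend's own operations.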

## Features

The algorithms found in this repository often power the `einsum` optimizations
in many of the above projects. For example, the optimization of `np.einsum`
has been passed upstream, and most of the same features found in
this repository can be enabled with `np.einsum(..., optimize=True)`. However,
this repository often has more up-to-date algorithms for complex contractions.
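
For reference, here is a sketch of that upstreamed NumPy interface, reusing the operands from the example above:

```python
import numpy as np

N = 10
C = np.random.rand(N, N)
I = np.random.rand(N, N, N, N)

# NumPy's built-in path optimization, upstreamed from this project.
result = np.einsum('pi,qj,ijkl,rk,sl->pqrs', C, C, I, C, C, optimize=True)
```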

The following capabilities are enabled by `opt_einsum`:

* Inspect [detailed information](https://dgasmith.github.io/opt_einsum/paths/introduction) about the path chosen (see the sketch after this list).
* Perform contractions with [numerous backends](https://dgasmith.github.io/opt_einsum/getting_started/backends), including on the GPU and with libraries such as [TensorFlow](https://www.tensorflow.org) and [PyTorch](https://pytorch.org).
* Generate [reusable expressions](https://dgasmith.github.io/opt_einsum/getting_started/reusing_paths), potentially with [constant tensors](https://dgasmith.github.io/opt_einsum/getting_started/reusing_paths#specifying-constants), that can be compiled for greater performance (also sketched below).
* Use an arbitrary number of indices to find contractions for [hundreds or even thousands of tensors](https://dgasmith.github.io/opt_einsum/examples/large_expr_with_greedy).
* Share [intermediate computations](https://dgasmith.github.io/opt_einsum/getting_started/sharing_intermediates) among multiple contractions.
* Compute gradients of tensor contractions using [autograd](https://github.com/HIPS/autograd) or [jax](https://github.com/google/jax).
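
As a short sketch of the first and third items above (the operands are illustrative; see the linked documentation for the full APIs):

```python
import numpy as np

from opt_einsum import contract_expression, contract_path

C = np.random.rand(10, 10)
I = np.random.rand(10, 10, 10, 10)

# Inspect the chosen contraction path and its estimated cost.
path, info = contract_path('pi,qj,ijkl,rk,sl->pqrs', C, C, I, C, C)
print(info)  # per-step report: scaling, BLAS usage, intermediate shapes

# Build a reusable expression from shapes alone, then evaluate it
# repeatedly without re-running the path optimizer.
expr = contract_expression('pi,qj,ijkl,rk,sl->pqrs',
                           C.shape, C.shape, I.shape, C.shape, C.shape)
result = expr(C, C, I, C, C)
```

Because the expression is built from shapes alone, the often expensive path search is paid once and amortized over many evaluations.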

Please see the [documentation](https://dgasmith.github.io/opt_einsum/index) for more features!

## Installation

`opt_einsum` can be installed either via pip with `pip install opt_einsum` or from conda-forge with `conda install opt_einsum -c conda-forge`.
See the installation [documentation](https://dgasmith.github.io/opt_einsum/getting_started/install) for further methods.
## Citation

If this code has benefited your research, please support us by citing:

Daniel G. A. Smith and Johnnie Gray, opt_einsum - A Python package for optimizing contraction order for einsum-like expressions. *Journal of Open Source Software*, **2018**, 3(26), 753

DOI: <https://doi.org/10.21105/joss.00753>

## Contributing

All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome.

A detailed overview on how to contribute can be found in the [contributing guide](https://github.com/dgasmith/opt_einsum/blob/master/.github/CONTRIBUTING.md).