Elara Math (original edition)
Elara Math is a Rust-native math library, developed as part of Project Elara's suite of open-source software libraries. It contains support for:
- Tensors: N-dimensional arrays with built-in support for automatic differentiation* (supported; see the short sketch below)
- Basic vectorized operations and linear algebra with tensors (supported)
- Numerical solvers for integrals & differential equations (partially supported**, work-in-progress)
- Basic machine learning tools for building feedforward fully-connected neural networks, with two APIs: one PyTorch-style, one TensorFlow-style (supported)
*: GPU tensors are not yet available, but GPU acceleration is planned for the future
**: Numerical integration is supported out-of-the-box. However, the numerical differential equation solver has been (temporarily) moved to a separate library, elara-array, which is currently being developed in parallel.
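As a quick taste of the tensor and autodiff API, here is a minimal sketch. It reuses only the calls that appear in the full demo further below, so no API beyond what this README already shows is assumed:

use elara_log::prelude::*;
use elara_math::prelude::*;

fn main() {
    // elara-math's logging companion must be initialized first
    Logger::new().init().unwrap();

    // A 2x2 tensor and a random 2x1 weight tensor
    let a = tensor![
        [1.0, 2.0],
        [3.0, 4.0]];
    let w = Tensor::rand([2, 1]);

    // Differentiate through a small graph: matmul -> ReLU -> MSE loss
    let out = a.matmul(&w).relu();
    let target = tensor![
        [1.0],
        [0.0]];
    let loss = elara_math::mse(&out, &target);
    loss.backward(); // accumulate gradients through the graph
    w.update(0.01);  // one gradient-descent step, as in the demo
    w.zero_grad();   // clear gradients before any further passes
    println!("{:?}", w);
}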
elara-math is public domain software, like the rest of Project Elara, meaning it is essentially unlicensed software: you can use it for basically any project you want, however you want, with or without attribution.
elara-math is intended both to contain a set of ready-to-use solvers and vectorized math on NumPy-style N-dimensional arrays, and to offer a flexible, user-friendly API on top of which more specialized/advanced libraries for computational tasks can be built.
Shoutouts: see ACKNOWLEDGEMENTS.md for acknowledgements.
Disclaimer
This repository hosts the original edition of the elara-math library. Please note that we have recently moved to prioritizing our MIT-licensed community edition libraries (see elara-array-community, elara-gfx-community, and elara-math-community) as the recommended avenue for open-source contributors, so contributions should generally be sent to the community edition repositories. All original edition libraries will still be developed, but at a much, much slower pace. If you are a contributor, please only contribute to the original edition libraries if:
- You are okay with dedicating your work to the public domain, thus making it un-copyrighted
- You accept that your work is likely not going to make it to the most actively-developed parts of Project Elara
We are aware that these are major dealbreakers for many developers, which is why the community editions exist in the first place: by being MIT-licensed and open to contributors, they involve far less of a hurdle and sacrifice when contributing. That said, Project Elara's original edition libraries are still essential - they form the public domain core of the project, whose value we believe in tremendously. So if you are sure you want to dedicate your work to the public domain, go ahead and contribute here. Otherwise, send your contributions to the community edition of this library!
Demo
As an example, here is a tiny working neural network using elara-math and its companion library elara-log (installed automatically alongside elara-math), ported from this excellent Python demo. Given four 3-element training inputs, the network learns to predict the first element of each input row:
use elara_log::prelude::*;
use elara_math::prelude::*;

const EPOCHS: usize = 10000;
const LR: f64 = 1e-5;

fn forward_pass(data: &Tensor, weights: &Tensor, biases: &Tensor) -> Tensor {
    (&data.matmul(&weights) + biases).relu()
}

fn main() {
    // Initialize logging library
    Logger::new().init().unwrap();
    let train_data = tensor![
        [0.0, 0.0, 1.0],
        [1.0, 1.0, 1.0],
        [1.0, 0.0, 1.0],
        [0.0, 1.0, 1.0]];
    let train_labels = tensor![
        [0.0],
        [1.0],
        [1.0],
        [0.0]
    ].reshape([4, 1]);
    let weights = Tensor::rand([3, 1]);
    let biases = Tensor::rand([4, 1]);
    println!("Weights before training:\n{:?}", weights);
    for epoch in 0..(EPOCHS + 1) {
        // Forward pass, then backpropagate the MSE loss
        let output = forward_pass(&train_data, &weights, &biases);
        let loss = elara_math::mse(&output, &train_labels);
        println!("Epoch {}, loss {:?}", epoch, loss);
        loss.backward();
        // Gradient-descent step, then reset gradients for the next epoch
        weights.update(LR);
        weights.zero_grad();
        biases.update(LR);
        biases.zero_grad();
    }
    let pred_data = tensor![[1.0, 0.0, 0.0]];
    let pred = forward_pass(&pred_data, &weights, &biases);
    println!("Weights after training:\n{:?}", weights);
    println!("Prediction [1, 0, 0] -> {:?}", pred);
}
For more examples, including basic usage of tensors, automatic differentiation, and building more complex neural networks, see the examples folder.
Usage
To use elara-math in your own project, simply add it with Cargo:
cargo add elara-math elara-log-ng
Then in your code, just import the library:
use elara_log::prelude::*; // this is required
use elara_math::prelude::*; // load prelude

fn main() {
    // Initialize elara-math's logging
    // library first
    Logger::new().init().unwrap();

    // rest of your code
    // ...
}
The library's prelude is designed for user-friendliness and comes with a variety of modules pre-loaded. If you want finer-grained control, you can individually import just the modules you need.
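For instance, since the prelude re-exports the Tensor type (as the demo above relies on), importing a single item instead of the whole glob looks like this (a minimal sketch; consult the crate documentation for the full module layout):

use elara_math::prelude::Tensor; // just the Tensor type, instead of `use elara_math::prelude::*;`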
Bindings
Elara Math has experimental C/C++ bindings generated by cbindgen. The header file for the library is available in the bindings/ folder (elara_math.h).
An example is available in c_examples/autograd.c. It assumes you have first built the library with cargo build and have CMake installed.
⚠️ Right now, the C/C++ bindings are not yet functional, because NdArray and parts of the Rust standard library do not have a stable ABI and are therefore not supported by #[repr(C)]. We will continue to work on the bindings to bring them to usable readiness with time, but they cannot be used as of right now.
Developing
To develop elara-math, first clone the repository:
git clone https://github.com/elaraproject/elara-math
git submodule update --init --recursive
Then, copy over the pre-commit githook:
cp .githooks/pre-commit .git/hooks/pre-commit && chmod a+x .git/hooks/pre-commit
You should then be all set to start making changes!
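From there, the standard Cargo workflow applies for building and testing your changes (these are generic Cargo commands, not project-specific recipes; the repository's justfile may provide its own):
cargo build
cargo test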
Contributors
- Jacky Song
- Jerry Teng