
Kaffe - A PyTorch-inspired library written in Rust


Kaffe is, as the title suggests, a library for building neural networks in Rust.

The goal is to provide a simple way to write and test your own models. The syntax should feel familiar to PyTorch users, though some features may borrow names from NumPy or even TensorFlow.

In the future, the matrix library might be moved out into its own project, but for now everything lives in the same crate.

Why? Because sometimes you wanna make cool and fast stuff in Rust :)

Examples

Tensor basic example

use kaffe::tensor::Tensor;

fn main() {
    // 2x2x2 tensor where every element is 10.0
    let t = Tensor::init(10f32, vec![2, 2, 2]);

    // Element-wise logarithm with the given base
    let res = t.log(10.0);

    println!("{:?}", res.data);

    // 2x4 tensor with random values between 1.0 and 4.0
    let tensor = Tensor::randomize_range(1.0, 4.0, vec![2, 4]);

    assert!(tensor.all(|&e| e >= 1.0));

    // Element-wise division by a scalar
    let tensor = Tensor::init(20.0, vec![2, 2]);
    let value: f32 = 2.0;

    let result_mat = tensor.div_val(value);

    assert_eq!(result_mat.data, vec![10.0; 4]);

    // Shapes can have any number of dimensions
    let tensor = Tensor::init(4f32, vec![1, 1, 1, 4]);

    assert_eq!(tensor.data, vec![4f32; 4]);
    assert_eq!(tensor.shape, vec![1, 1, 1, 4]);

    // Conditionally update elements in place
    let mut tensor = Tensor::init(2.0, vec![2, 4]);
    println!("{}", tensor.data[0]);

    tensor.set_where(|e| {
        if *e == 2.0 {
            *e = 2.3;
        }
    });

    println!("{}", tensor.data[0]);

    assert_eq!(tensor.data[0], 2.3);

    // Index a single element by its multi-dimensional coordinates
    println!("{}", tensor.get(vec![0, 0]).unwrap());
}

Neural net basic example - To Be Implemented

use kaffe::Matrix;
use kaffe::{Net, Layer, nn, optimizer::*, loss::*};

// Here lies our model 
struct MyNet {
    layers: Vec<Layer>
}

// Implement the Net trait for our model
impl Net for MyNet {
    /// Sets up the parameters (layers) for the struct
    fn init() -> Self {
        let mut layers: Vec<Layer> = Vec::new();
        layers.push(nn::Conv2d(1, 32, 3, 1));
        layers.push(nn::Conv2d(32, 64, 3, 1));
        layers.push(nn::Dropout(0.25));
        layers.push(nn::Dropout(0.5));
        layers.push(nn::FCL(9216, 128));
        layers.push(nn::FCL(128, 10));

        Self { layers }
    }

    /// Define a forward pass
    fn forward(&self, x: &Matrix) -> Matrix {
        let x = self.layers[0](x);
        let x = ReLU(x);
        let x = self.layers[1](x);
        let x = ReLU(x);
        let x = self.layers[2](x);
        let x = ReLU(x);
        log_softmax(x)
    }
}

fn train(model: &MyNet, 
        train_dataloader: &DataLoader, 
        optimizer: &Optimizer, 
        epoch: usize) {
    model.train();

    for (batch_idx, (data, target)) in train_dataloader.iter().enumerate() {
        optimizer.zero_grad();
        let output = model(data);
        let loss = BCELoss(output, target);
        loss.backward();
        optimizer.step();
    }
}

fn test(model: &MyNet, 
        test_dataloader: &DataLoader, 
        optimizer: &Optimizer, 
        epoch: usize) {
    model.eval();

    let mut test_loss = 0.0;
    let mut correct = 0.0;

    optimizer.no_grad();

    for (batch_idx, (data, target)) in test_dataloader.iter().enumerate() {
        let output = model(data);
        test_loss += BCELoss(output, target);

        let pred = output.argmax(Dimension::Row);
        correct += pred.eq(target.view_as(pred)).sum();
    }
    test_loss /= test_dataloader.count();
}

fn main() {
    let d1 = download_dataset(url, "../data", true, true, transform);
    let d2 = download_dataset(url, "../data", false, false, transform);

    let train_dl = DataLoader::new(&d1);
    let test_dl = DataLoader::new(&d2);

    let model = MyNet::init();
    let optimizer = SGD::init(0.001, 0.8);

    for epoch in 1..EPOCHS+1 {
        train(&model, &train_dl, &optimizer, epoch);
        test(&model, &test_dl, &optimizer, epoch);
    }

    if args.SAVE_MODEL {
        model.save_model("mnist_test.kaffe_pt");        
    }
}

GPU Support

As of right now, support for training on the GPU is not happening anytime soon. Although... transpilation IS a thing, you know.

For more examples, please see the examples directory.

Documentation

Full API documentation can be found here.

Features

  • Blazingly fast
  • Common tensor operations exist under the tensor module
  • Optimizers
  • Support for both f32 and f64
  • Activation functions: ReLU, GeLU, PReLU, Sigmoid (see the sketch after this list)
  • Basic neural net features
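
As a small taste of element-wise tensor operations, here is a minimal sketch that hand-rolls a ReLU with set_where, using only calls already shown in the tensor example above. It is purely illustrative; the library's built-in ReLU (and friends) are the intended way to do this in a model.

use kaffe::tensor::Tensor;

fn main() {
    // 2x2 tensor where every element is -3.0
    let mut t = Tensor::init(-3.0, vec![2, 2]);

    // Hand-rolled ReLU: clamp every negative entry to 0.0 in place
    t.set_where(|e| {
        if *e < 0.0 {
            *e = 0.0;
        }
    });

    // All four elements should now be 0.0
    assert_eq!(t.data, vec![0.0; 4]);
}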

License

Dual-licensed under Apache-2.0 (LICENSE-APACHE) and MIT (LICENSE-MIT).

Stars

Watchers

Forks

Releases

No releases published

Packages

No packages published

Languages