AI is coming to Linera!

We’re happy to share that we’ve created our very first AI demo application.

The application is built on top of candle, a Rust ML library by Hugging Face, and runs a Llama2/TinyStories model on the CPU using Wasm SIMD instructions.
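For readers curious about the stack, here is a minimal, self-contained sketch of candle running on the CPU. This is not the demo’s code; it only shows the kind of tensor work (a toy matmul, the core operation in every transformer layer) that the model builds on, and it assumes just the `candle-core` crate:

```rust
use candle_core::{Device, Tensor};

fn main() -> candle_core::Result<()> {
    // candle's CPU backend; when compiled to Wasm, SIMD kernels are used where available.
    let device = Device::Cpu;

    // A toy matmul, the core operation inside every transformer layer.
    let activations = Tensor::new(&[[1f32, 2.0], [3.0, 4.0]], &device)?;
    let weights = Tensor::new(&[[0.5f32, 0.0], [0.0, 0.5]], &device)?;
    let output = activations.matmul(&weights)?;

    println!("{:?}", output.to_vec2::<f32>()?);
    Ok(())
}
```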

Interestingly, the AI model in this demo runs on the client side, in the read-only, non-metered part of the Linera application (a.k.a. the service).

This mode of operation is great for running user AI queries off-chain (and hence privately) while leveraging the security of the Linera VM and the immutability and transparency of the model parameters. This mechanism is also the first step towards on-chain AI inference (within transactions) on Linera microchains.
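To make the architecture concrete, here is a rough, hypothetical sketch of what the service half of such an application could look like. The type and function names below (`StoryService`, `handle_query`, the helpers) are illustrative placeholders, not the demo’s actual code or the exact `linera-sdk` interface:

```rust
/// Hypothetical service half of a Linera AI application. The real SDK trait
/// and signatures differ; this only illustrates where the pieces live.
struct StoryService;

impl StoryService {
    /// Runs client-side in the read-only, non-metered service runtime,
    /// so answering a query costs no gas and never leaves the user's machine.
    async fn handle_query(&self, prompt: String) -> String {
        // Model parameters come from on-chain application state, so they are
        // immutable, transparent, and verified by the validator network.
        let weights = self.read_model_parameters(); // hypothetical helper
        run_inference(&weights, &prompt) // hypothetical candle-based forward pass
    }

    fn read_model_parameters(&self) -> Vec<u8> {
        unimplemented!("load the TinyStories weights from application state")
    }
}

fn run_inference(_weights: &[u8], _prompt: &str) -> String {
    unimplemented!("tokenize the prompt, run the Llama2 forward pass, decode")
}
```

The contract half of the application, which is metered and runs within transactions, would only come into play once inference moves on-chain.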

This is just the beginning! Stay tuned for future improvements and new AI demos!


Hi Mathieu - I got some great feedback from JV that helped me come up with the interpretation below. Is there anything you’d want to add/correct?

Summary
The demo app demonstrates how Linera can securely store and trustlessly serve an AI model’s parameters, and how users can receive the model’s responses privately.

Key Points

  1. An interactive AI application has been integrated with Linera for the first time.
  2. It will work with Linera’s future browser extension client.
  3. The parameters of the model used by the application are stored in a smart contract on a microchain (immutable and transparent), where anyone can inspect them.
  4. Linera’s validator network ensures that the correct model is stored on-chain and served correctly to users.
  5. In this case, using it is free, as it runs in the non-metered Linera service.
  6. Linera’s non-metered service is ideal for running user queries locally (inside the user’s browser extension), including AI queries, without incurring gas fees.
  7. This demo is also a first step towards showing how AI could integrate with Linera in use cases that lead to related on-chain transactions.