Stores

Stores are a database-agnostic way to persist data in your skill. They have a simple interface that handles CRUD operations.
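Conceptually, every adapter exposes the same CRUD surface to your skill. The sketch below is an illustrative, in-memory take on that idea; the class and method names here are assumptions for illustration, not the actual Spruce Stores API.

```typescript
// Illustrative sketch of a CRUD store surface (NOT the real Spruce API).
// Each adapter (NeDb, MongoDb, Postgres, ChromaDb) would sit behind an
// interface shaped roughly like this.
interface BaseRecord {
    id: string
}

class InMemoryStore<T extends BaseRecord> {
    private records = new Map<string, T>()

    // Create: persist a new record
    async createOne(values: T): Promise<T> {
        this.records.set(values.id, values)
        return values
    }

    // Read: fetch a record by id
    async findOne(id: string): Promise<T | undefined> {
        return this.records.get(id)
    }

    // Update: merge changes into an existing record
    async updateOne(id: string, updates: Partial<T>): Promise<T | undefined> {
        const existing = this.records.get(id)
        if (!existing) {
            return undefined
        }
        const updated = { ...existing, ...updates }
        this.records.set(id, updated)
        return updated
    }

    // Delete: remove a record, returning whether it existed
    async deleteOne(id: string): Promise<boolean> {
        return this.records.delete(id)
    }
}
```

Because every adapter honors the same surface, your skill's code does not change when you swap NeDb for MongoDb or Postgres.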

Current Adapters

Currently, the Spruce Platform has 4 adapters for Stores:

  1. NeDb: For testing and development purposes.
  2. MongoDb: The default adapter for production.
  3. Postgres: Can be enabled for production and/or tests.
  4. ChromaDb: A vector-based database for semantic search.

Stores in Development

During development, by default, the Stores layer will utilize NeDb, an in-memory database with a MongoDb-compatible API. While NeDb is no longer maintained, it provides enough functionality for testing and development purposes.

Seeding Data

When you are in development, you may want to seed your store with data so you can test against it. Luckily, that’s a pretty easy thing to do! Let’s walk through it!

For this scenario, we’re going to ensure that our listener returns the expected results from the Database. We’ll start with some listener tests that have already been created.

Test 1: Add the @seed(...) decorator
import { AbstractSpruceFixtureTest } from '@sprucelabs/spruce-test-fixtures'
import { test } from '@sprucelabs/test-utils'
import { crudAssert } from '@sprucelabs/spruce-crud-utils'

export default class RootSkillViewTest extends AbstractSpruceFixtureTest {
    @test()
    protected static async rendersMaster() {
        const vc = this.views.Controller('eightbitstories.root', {})
        crudAssert.skillViewRendersMasterView(vc)
    }
}
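The idea behind @seed(...) is to populate a store with fixture records before the test body runs, so your assertions have data to work against. Below is a self-contained sketch of that pattern; the store and handler names are hypothetical and stand in for the real Spruce fixtures API.

```typescript
// Sketch of the seeding pattern behind @seed(...): fill a store with
// fixture records, then assert a "listener" returns them. Names here
// are hypothetical, not the actual Spruce fixtures API.
type Story = { id: string; title: string }

class StoryStore {
    private stories: Story[] = []

    // Seed the store with a given number of fixture records
    async seed(count: number): Promise<void> {
        for (let i = 0; i < count; i++) {
            this.stories.push({ id: `${i}`, title: `Story ${i}` })
        }
    }

    async find(): Promise<Story[]> {
        return [...this.stories]
    }
}

// A stand-in for an event listener that answers with store contents
async function handleListStories(store: StoryStore): Promise<Story[]> {
    return store.find()
}
```

In a real skill test, the @seed(...) decorator does this seeding for you before the test method runs, so the listener under test finds records in the store.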

Custom Data

Stores in Production

Chroma Data Store

Give your skill the ability to store and retrieve data from a Chroma database for vector-based searching. This gives your Data Store the ability to handle semantic and nearest-neighbor searches.
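Under the hood, a vector store answers a nearest-neighbor query by ranking stored embeddings against a query embedding, typically by cosine similarity. Here is a minimal, self-contained sketch of that ranking (not Chroma's implementation, just the idea):

```typescript
// Cosine similarity: how aligned two embedding vectors are (1 = identical
// direction, 0 = orthogonal, -1 = opposite).
function cosineSimilarity(a: number[], b: number[]): number {
    let dot = 0
    let normA = 0
    let normB = 0
    for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

// Return the id of the stored record whose embedding is closest to the query
function nearestNeighbor(
    query: number[],
    records: { id: string; embedding: number[] }[]
): string {
    let bestId = records[0].id
    let bestScore = -Infinity
    for (const record of records) {
        const score = cosineSimilarity(query, record.embedding)
        if (score > bestScore) {
            bestScore = score
            bestId = record.id
        }
    }
    return bestId
}
```

A semantic search is the same ranking, except the query vector comes from running your search text through an embedding model first.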

Running Chroma

  1. Clone the @sprucelabs/chroma-data-store repository
  2. cd into the repository
  3. Run yarn start.chroma.docker

Setting an embedding model

By default, the ChromaDatabase class will use llama3.2, hosted through Ollama, to generate embeddings.

Installing Ollama

  1. Visit https://ollama.com
  2. Click “Download”
  3. Select your OS

Installing Llama3.2

Llama 3.2 is the newest version of Llama (as of this writing) that supports embeddings.

  1. Inside your terminal, run ollama run llama3.2
  2. You should be able to visit http://localhost:11434/api/embeddings and get a 404 response (this is because the route only accepts POST requests)
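To sanity-check the endpoint beyond that 404, you can POST to it directly. The sketch below is based on Ollama's documented embeddings API (a model and prompt in, an embedding array out); field names may vary between Ollama versions, so treat this as an assumption to verify against your install.

```typescript
// Build the POST request Ollama's embeddings route expects. The route
// rejects GET, which is why visiting it in a browser returns a 404.
function buildEmbeddingRequest(model: string, prompt: string) {
    return {
        url: 'http://localhost:11434/api/embeddings',
        options: {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ model, prompt }),
        },
    }
}

// Fetch an embedding from a locally running Ollama server (Node 18+ has
// a global fetch). Requires Ollama to be running on localhost:11434.
async function embed(model: string, prompt: string): Promise<number[]> {
    const { url, options } = buildEmbeddingRequest(model, prompt)
    const response = await fetch(url, options)
    const json = (await response.json()) as { embedding: number[] }
    return json.embedding
}
```

For example, `embed('llama3.2', 'a bedtime story about dragons')` should resolve to a numeric vector once llama3.2 is installed and Ollama is running.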

Connecting to Chroma

Here are the steps to configure your skill to use ChromaDatabase

Step 1: Installing the Chroma Adapter

Inside your skill’s directory run:

yarn add @sprucelabs/chroma-data-store
Step 2: Enabling the adapter

Coming soon.

Improving embeddings with nomic-embed-text

We have seen significantly better search performance when using nomic-embed-text to generate embeddings.

Step 1: Installing nomic-embed-text

Run the following in your terminal:

ollama pull nomic-embed-text
Step 2: Configuring nomic-embed-text in your skill

Add the following to your skill’s .env:

CHROMA_EMBEDDING_MODEL="nomic-embed-text"
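One way an adapter might honor this variable is to read it from the environment and fall back to the default model when it is unset. The function below is a hypothetical sketch of that lookup, not the actual chroma-data-store implementation:

```typescript
// Hypothetical sketch: resolve the embedding model from the environment,
// defaulting to llama3.2 when CHROMA_EMBEDDING_MODEL is not set.
function resolveEmbeddingModel(env: Record<string, string | undefined>): string {
    return env.CHROMA_EMBEDDING_MODEL ?? 'llama3.2'
}
```

With the .env line above in place, the resolver would pick nomic-embed-text; without it, embedding generation falls back to llama3.2.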
