Stores
Stores are a database-agnostic way to persist data in your skill. They have a simple interface that handles CRUD operations.
- createOne(...): Create a new record in the store.
- create(...): Create many records in the store.
- updateOne(...): Update a record in the store.
- update(...): Update many records in the store.
- deleteOne(...): Delete a record in the store.
- delete(...): Delete many records in the store.
- findOne(...): Find a record in the store.
- find(...): Find many records in the store.
- scramble(...): Scramble a record in the store.
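The interface above can be sketched with a toy in-memory store. This is illustrative only — the real Stores layer is backed by a database adapter — and the `StoreRecord` shape and class name here are assumptions made up for the example:

```typescript
// Illustrative in-memory sketch of the store interface shape.
// The real Stores layer delegates to a database adapter; this toy
// version only demonstrates the CRUD method semantics.
type StoreRecord = { id: string; [key: string]: unknown }

class InMemoryStore {
    private records: StoreRecord[] = []
    private nextId = 1

    public async createOne(values: Omit<StoreRecord, 'id'>): Promise<StoreRecord> {
        const record = { id: `${this.nextId++}`, ...values }
        this.records.push(record)
        return record
    }

    public async findOne(query: Partial<StoreRecord>): Promise<StoreRecord | null> {
        return this.records.find((r) => this.matches(r, query)) ?? null
    }

    public async find(query: Partial<StoreRecord>): Promise<StoreRecord[]> {
        return this.records.filter((r) => this.matches(r, query))
    }

    public async updateOne(
        query: Partial<StoreRecord>,
        updates: Partial<StoreRecord>
    ): Promise<StoreRecord | null> {
        const record = await this.findOne(query)
        if (record) {
            Object.assign(record, updates)
        }
        return record
    }

    public async deleteOne(query: Partial<StoreRecord>): Promise<number> {
        const record = await this.findOne(query)
        if (!record) {
            return 0
        }
        this.records = this.records.filter((r) => r !== record)
        return 1
    }

    private matches(record: StoreRecord, query: Partial<StoreRecord>): boolean {
        return Object.entries(query).every(([key, value]) => record[key] === value)
    }
}
```

The plural variants (create, update, delete, find) follow the same pattern over many records at once.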
Current Adapters
Currently, the Spruce Platform has 4 adapters for Stores:
- NeDb: For testing and development purposes.
- MongoDb: The default adapter for production.
- Postgres: Can be enabled for production and/or tests.
- ChromaDb: A vector-based database for semantic search.
Stores in Development
During development, by default, the Stores layer will utilize an in-memory, MongoDb-compatible store called NeDb. While this database is no longer supported, it provides enough functionality to work for testing and development purposes.
Seeding Data
When you are in development, you may want to seed your store with data so you can test against it. Luckily, that’s a pretty easy thing to do! Let’s walk through it!
For this scenario, we’re going to ensure that our listener
returns the expected results from the Database. We’ll start with some listener tests that have already been created.
Test 1: Add the @seed(...) decorator
import { AbstractSpruceFixtureTest } from '@sprucelabs/spruce-test-fixtures'
import { test } from '@sprucelabs/test-utils'
import { crudAssert } from '@sprucelabs/spruce-crud-utils'

export default class RootSkillViewTest extends AbstractSpruceFixtureTest {
    @test()
    protected static async rendersMaster() {
        const vc = this.views.Controller('eightbitstories.root', {})
        crudAssert.skillViewRendersMasterView(vc)
    }
}
Custom Data
Stores in Production
Chroma Data Store
Give your skill the ability to store and retrieve data from a Chroma database for vector based searching. This gives your Data Store the ability to handle semantic and nearest neighbor searches.
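To illustrate what nearest-neighbor searching means, here is a minimal, brute-force sketch over embedding vectors. Chroma indexes embeddings far more efficiently than this; the toy version below only shows the core idea of ranking items by cosine similarity to a query vector:

```typescript
// Toy illustration of nearest-neighbor search over embeddings.
// A vector database like Chroma indexes embeddings for scale;
// this brute-force version only demonstrates the core idea.
function cosineSimilarity(a: number[], b: number[]): number {
    let dot = 0
    let normA = 0
    let normB = 0
    for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

function nearestNeighbor<T>(
    query: number[],
    items: { embedding: number[]; value: T }[]
): T {
    let best = items[0]
    for (const item of items) {
        if (
            cosineSimilarity(query, item.embedding) >
            cosineSimilarity(query, best.embedding)
        ) {
            best = item
        }
    }
    return best.value
}
```

In a real semantic search, the embeddings come from a model (see "Setting an embedding model" below), so texts with similar meaning end up close together in vector space.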
Running Chroma
- Clone the @sprucelabs/chroma-data-store repository
- cd into the repository
- Run yarn start.chroma.docker
Setting an embedding model
By default, the ChromaDatabase class will use llama3.2, hosted through Ollama, to generate embeddings.
Installing Ollama
- Visit https://ollama.com
- Click “Download”
- Select your OS
Installing Llama3.2
Llama 3.2 is the newest version of Llama (as of this writing) that supports embeddings.
- Inside your terminal, run ollama run llama3.2
- You should be able to visit http://localhost:11434/api/embeddings and get a 404 response (this is because the route only accepts POST requests)
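As a sketch of why that route only accepts POST requests, here is how an embedding request to it might look. The { model, prompt } payload follows Ollama's embeddings API; the helper names below are made up for this example, and Ollama must be running locally for fetchEmbedding to succeed:

```typescript
// Sketch: requesting an embedding from a locally running Ollama.
// The /api/embeddings route accepts POST with a model and a prompt;
// a plain GET (like visiting it in a browser) returns a 404.
const OLLAMA_URL = 'http://localhost:11434/api/embeddings'

function buildEmbeddingRequest(model: string, prompt: string) {
    return {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ model, prompt }),
    }
}

async function fetchEmbedding(model: string, prompt: string): Promise<number[]> {
    // Requires a running Ollama instance on localhost:11434
    const response = await fetch(OLLAMA_URL, buildEmbeddingRequest(model, prompt))
    const { embedding } = (await response.json()) as { embedding: number[] }
    return embedding
}
```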
Connecting to Chroma
Here are the steps to configure your skill to use ChromaDatabase:
Step 1: Installing the Chroma Adapter
Inside your skill’s directory run:
yarn add @sprucelabs/chroma-data-store
Step 2: Enabling the adapter
Coming soon.
Improving embeddings with nomic-embed-text
We have seen significantly better search performance when using nomic-embed-text
to generate embeddings.
Step 1: Installing nomic-embed-text
Run the following in your terminal:
ollama run nomic-embed-text
Step 2: Configuring nomic-embed-text in your skill
Add the following to your skill’s .env:
CHROMA_EMBEDDING_MODEL="nomic-embed-text"
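As a sketch, resolving that variable with a fallback to the default model might look like the following. How ChromaDatabase actually reads the variable is internal to the adapter, and the helper name here is hypothetical:

```typescript
// Sketch: resolving the embedding model from the environment,
// falling back to the documented default (llama3.2) when the
// CHROMA_EMBEDDING_MODEL variable is not set.
function resolveEmbeddingModel(env: Record<string, string | undefined>): string {
    return env.CHROMA_EMBEDDING_MODEL ?? 'llama3.2'
}

// e.g. resolveEmbeddingModel(process.env)
```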