Step-by-Step Guide

Set Up RAGS-AI

Get RAGS-AI running on your Mac in under 10 minutes. Follow these steps carefully.

Requires macOS 12+ • Terminal access • ~10GB disk space

0. Prerequisites

Before you start, make sure you have:

macOS 12 (Monterey) or later
8GB RAM minimum (16GB recommended)
10GB+ free disk space
Terminal / Command line access
Admin access to install software
Microphone (optional, for voice commands)
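
If you'd like to verify these from the terminal before starting, the following standard macOS commands report your OS version, installed RAM, and free disk space (purely an optional sanity check):

sw_vers -productVersion     # macOS version, should be 12.x or later
sysctl -n hw.memsize        # installed RAM in bytes (8589934592 = 8GB)
df -h /                     # free space on the system volume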

1. Install Homebrew

Homebrew is a package manager for macOS. Skip this step if you already have it installed.

Check if Homebrew is installed:

brew --version

If not installed, run this command:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Follow the on-screen instructions after installation. You may need to add Homebrew to your PATH.
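
On Apple Silicon Macs, Homebrew installs to /opt/homebrew, which is not on the PATH by default. The installer prints the exact commands to run for your setup; they typically look like the following (treat the installer's own output as authoritative):

echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"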

2. Install Ollama

Ollama runs local LLMs on your machine. This is the brain of RAGS-AI.

Install via Homebrew:

brew install ollama

Or download directly from:

https://ollama.ai

Verify installation:

ollama --version

3. Download AI Model

Pull a local LLM. We recommend starting with llama3.2, which offers a good balance of quality, speed, and memory use on most Macs. Note: if you installed Ollama with Homebrew, the server may need to be running before models can be pulled; if ollama pull reports it cannot connect, start the server (step 4) in another terminal first, then retry.

Recommended model (4.7GB):

ollama pull llama3.2

Alternative models:

ollama pull phi3

2.3GB • Smaller, faster, good for low RAM

ollama pull llama3.2:1b

1.3GB • Lightweight, basic tasks

ollama pull llava

4.7GB • Vision capable (for camera features)

Download time depends on your internet speed. Models are stored locally and only downloaded once.
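
Once a pull finishes, you can confirm which models are stored locally:

ollama list     # lists downloaded models and their sizes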

4. Start Ollama Server

Ollama needs to run as a background server. Keep this running while using RAGS-AI.

Start the server:

ollama serve

Test if it's working:

curl http://localhost:11434

Should return: "Ollama is running"
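
For a deeper check that the server can actually load and run a model, you can send a small test request to Ollama's generate endpoint (this assumes you pulled llama3.2 in step 3; substitute whichever model you downloaded):

curl http://localhost:11434/api/generate -d '{"model": "llama3.2", "prompt": "Say hello in one short sentence.", "stream": false}'

A JSON reply containing a short response means the model is working end to end.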

Tip: Open a new terminal tab/window for the next steps. Keep Ollama running in the background.
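
Alternatively, if you installed Ollama with Homebrew, you can run it as a managed background service so you don't need to keep a terminal tab open (optional; ollama serve in a dedicated tab works just as well):

brew services start ollama     # start Ollama as a background service
brew services stop ollama      # stop it when you're done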

5. Install Node.js

The RAGS-AI backend requires Node.js 18 or later.

Check if Node.js is installed:

node --version

If not installed or version is below 18:

brew install node
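
After installing, confirm both Node.js and npm are available and that Node is version 18 or later:

node --version     # should print v18.x or higher
npm --version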

6. Clone & Set Up RAGS-AI

Clone the repository and install dependencies.

Clone the repository:

git clone https://github.com/raghavshahhh/RAGS-AI.git

Navigate to the project:

cd RAGS-AI

Install dependencies:

npm install

Run the setup script (if available):

npm run setup
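
If npm run setup reports a missing script, the repository may not include one; running npm run with no arguments lists every script the project actually defines, so you can see what's available:

npm run     # lists all scripts defined in package.json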

7. Run RAGS-AI

Make sure Ollama is running in another terminal, then start RAGS-AI.

Start RAGS-AI:

npm run start

Or for development mode:

npm run dev

You're all set!

RAGS-AI should now be running. Try saying "Hey RAGS" to activate voice commands.

Quick Reference

All commands in order:

# 1. Install Homebrew (if needed)

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# 2. Install Ollama

brew install ollama

# 3. Pull AI model

ollama pull llama3.2

# 4. Start Ollama server (keep running)

ollama serve

# 5. Install Node.js (if needed)

brew install node

# 6. Clone and set up RAGS-AI

git clone https://github.com/raghavshahhh/RAGS-AI.git

cd RAGS-AI

npm install

# 7. Run RAGS-AI

npm run start

Common Issues

Ollama command not found

Make sure Homebrew's bin directory is on your PATH. On Apple Silicon, run: export PATH="/opt/homebrew/bin:$PATH" and try again (step 1 shows how to make this permanent).

Model download stuck or slow

Large models take time. Check your internet connection. You can also try a smaller model like phi3.

Port 11434 already in use

Another Ollama instance might be running. Kill it with: pkill ollama, then restart.
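
If that doesn't free the port, you can check which process is holding it and stop it by PID (lsof ships with macOS; replace <PID> with the number it reports):

lsof -i :11434     # shows the process listening on port 11434
kill <PID>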

npm install fails

Try clearing npm cache: npm cache clean --force, then delete node_modules and try again.
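
The full recovery sequence looks like this:

npm cache clean --force
rm -rf node_modules
npm install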

Voice commands not working

Check microphone permissions under System Settings > Privacy & Security > Microphone (on macOS 12, System Preferences > Security & Privacy > Privacy).

RAGS-AI crashes on startup

Make sure Ollama server is running first. Check the console for specific error messages.

Still Stuck? Get Help

Check the GitHub issues or open a new one. The community is here to help.