Set Up RAGS-AI
Get RAGS-AI running on your Mac in under 10 minutes. Follow these steps carefully.
Requires macOS 12+ • Terminal access • ~10GB disk space
Prerequisites
Before you start, make sure you have macOS 12 or later, access to the Terminal app, and roughly 10 GB of free disk space for models and dependencies.
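You can quickly confirm the macOS version and available disk space from the terminal:
# Check the macOS version (should report 12.x or later)
sw_vers -productVersion
# Check free space on the startup volume (roughly 10 GB needed)
df -h /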
Step 1: Install Homebrew
Homebrew is a package manager for macOS. Skip this step if you already have it installed.
Check if Homebrew is installed:
brew --version
If not installed, run this command:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Follow the on-screen instructions after installation. You may need to add Homebrew to your PATH.
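If the installer asks you to add Homebrew to your PATH (typical on Apple Silicon Macs, where Homebrew lives under /opt/homebrew), it prints the exact commands for your machine. They usually look like this:
# Add Homebrew to your shell environment (Apple Silicon default prefix)
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"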
Step 2: Install Ollama
Ollama runs local LLMs on your machine. This is the brain of RAGS-AI.
Install via Homebrew:
brew install ollama
Or download directly from:
https://ollama.ai
Verify installation:
ollama --version
Step 3: Download AI Model
Pull a local LLM model. We recommend starting with llama3.2 for best performance.
Recommended model (4.7GB):
ollama pull llama3.2
Alternative models:
ollama pull phi3
2.3GB • Smaller, faster, good for low RAM
ollama pull llama3.2:1b
1.3GB • Lightweight, basic tasks
ollama pull llava
4.7GB • Vision capable (for camera features)
Download time depends on your internet speed. Models are stored locally and only downloaded once.
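By default, Ollama keeps downloaded models under ~/.ollama on macOS, so you can check how much space they are using at any time:
# See how much disk space downloaded models occupy (default location)
du -sh ~/.ollama/models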
Step 4: Start Ollama Server
Ollama needs to run as a background server. Keep this running while using RAGS-AI.
Start the server:
ollama serve
Test if it's working:
curl http://localhost:11434
Should return: "Ollama is running"
Tip: Open a new terminal tab/window for the next steps. Keep Ollama running in the background.
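From that new terminal, you can optionally confirm the whole pipeline works before touching RAGS-AI:
# Optional: if you installed Ollama with Homebrew, you can run it as a
# background service instead of keeping a terminal open:
# brew services start ollama
# List the models you pulled in Step 3
ollama list
# Send a one-off prompt to the model (the server must be running)
ollama run llama3.2 "Say hello in one short sentence"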
Step 5: Install Node.js
RAGS-AI backend requires Node.js 18 or later.
Check if Node.js is installed:
node --version
If not installed or version is below 18:
brew install node
Step 6: Clone & Set Up RAGS-AI
Clone the repository and install dependencies.
Clone the repository:
git clone https://github.com/raghavshahhh/RAGS-AI.git
Navigate to the project:
cd RAGS-AI
Install dependencies:
npm install
Run the setup script (if available):
npm run setup
Step 7: Run RAGS-AI
Make sure Ollama is running in another terminal, then start RAGS-AI.
Start RAGS-AI:
npm run start
Or for development mode:
npm run dev
You're all set!
RAGS-AI should now be running. Try saying "Hey RAGS" to activate voice commands.
Quick Reference
All commands in order:
# 1. Install Homebrew (if needed)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# 2. Install Ollama
brew install ollama
# 3. Pull AI model
ollama pull llama3.2
# 4. Start Ollama server (keep running)
ollama serve
# 5. Install Node.js (if needed)
brew install node
# 6. Clone and setup RAGS-AI
git clone https://github.com/raghavshahhh/RAGS-AI.git
cd RAGS-AI
npm install
# 7. Run RAGS-AI
npm run start
Common Issues
Ollama command not found
Make sure Homebrew is in your PATH. Run: export PATH="/opt/homebrew/bin:$PATH" and try again.
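That fix only lasts for the current terminal session. To make it permanent, append it to your shell profile (assuming the default zsh shell):
# Persist the PATH change for future terminal sessions
echo 'export PATH="/opt/homebrew/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc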
Model download stuck or slow
Large models take time. Check your internet connection. You can also try a smaller model like phi3.
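Interrupted downloads can typically be resumed; re-running the same pull command should pick up where it left off rather than starting over:
# Re-running pull generally resumes a partial download
ollama pull llama3.2
# Or switch to a smaller model if bandwidth is limited
ollama pull phi3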
Port 11434 already in use
Another Ollama instance might be running. Kill it with: pkill ollama, then restart.
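To see which process is holding the port before killing it:
# Show the process listening on Ollama's default port
lsof -i :11434
# Stop any running Ollama processes, then start the server again
pkill ollama
ollama serve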
npm install fails
Try clearing npm cache: npm cache clean --force, then delete node_modules and try again.
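The full recovery sequence, run from the RAGS-AI project directory, looks like this:
# Clear the npm cache, remove installed packages, and reinstall from scratch
npm cache clean --force
rm -rf node_modules
npm install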
Voice commands not working
Check microphone permissions in System Preferences > Privacy & Security > Microphone.
RAGS-AI crashes on startup
Make sure Ollama server is running first. Check the console for specific error messages.
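A quick pre-flight check before launching, using the defaults from the steps above:
# Confirm the Ollama server is reachable on its default port
curl http://localhost:11434
# Confirm at least one model is available
ollama list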
Still Stuck? Get Help
Check the GitHub issues or open a new one. The community is here to help.