**Minimum requirements:**

- OS: Windows 10/11 or Linux (Ubuntu 20.04 or newer)
- CPU: Intel Core i5/AMD Ryzen 5 or better
- RAM: 8GB
- Storage: 4GB free disk space
- Python: 3.9 or newer

**Recommended requirements:**

- OS: Windows 11 or Linux (Ubuntu 22.04 or newer)
- CPU: Intel Core i7/AMD Ryzen 7 or better
- RAM: 16GB
- Storage: 8GB free disk space
- GPU: NVIDIA RTX 2060 6GB or better (with CUDA support)
- Python: 3.9 or newer
Note: NVIDIA GPU with CUDA support is recommended for optimal performance with AI features, Stable Diffusion, and audio generation.
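If you want to confirm that CUDA-capable hardware is visible before relying on the GPU-accelerated features, a small check like the following can help. This helper is illustrative, not part of RuneScript; it only probes for the `nvidia-smi` tool that ships with NVIDIA drivers.

```python
import shutil
import subprocess


def has_nvidia_gpu() -> bool:
    """Return True if nvidia-smi is on PATH and reports a working driver."""
    exe = shutil.which("nvidia-smi")
    if exe is None:
        return False
    # nvidia-smi exits non-zero when no GPU/driver is available.
    return subprocess.run([exe], capture_output=True).returncode == 0


print("NVIDIA GPU detected:", has_nvidia_gpu())
```

If this prints `False`, RuneScript still runs; the AI, Stable Diffusion, and audio features will just be slower on CPU.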
- Open PowerShell.
- Install Python:

  ```shell
  winget install -e --id Python.Python.3.9
  ```

- Follow the installer instructions.
- Ensure Python is accessible:

  ```shell
  python --version
  ```
- Open a terminal.
- Install Python:

  ```shell
  sudo apt-get update -y
  sudo apt-get install -y python3
  ```

- Verify the installation:

  ```shell
  python3 --version
  ```
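To confirm from within Python itself that the interpreter meets the 3.9 minimum stated above, a short sketch:

```python
import sys

# RuneScript requires Python 3.9 or newer.
MIN_VERSION = (3, 9)

if sys.version_info < MIN_VERSION:
    raise SystemExit(
        f"Python {MIN_VERSION[0]}.{MIN_VERSION[1]}+ required, "
        f"found {sys.version.split()[0]}"
    )
print("Python version OK:", sys.version.split()[0])
```

Run it with the same interpreter you verified above (`python` on Windows, `python3` on Linux).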
- Open PowerShell.
- Install Git:

  ```shell
  winget install -e --id Git.Git
  ```

- Verify Git installation:

  ```shell
  git --version
  ```
- Open a terminal.
- Install Git:

  ```shell
  sudo apt-get update -y
  sudo apt-get install -y git
  ```

- Verify Git installation:

  ```shell
  git --version
  ```
- Clone the repository:

  ```shell
  git clone https://github.com/Axlfc/RuneScript.git
  ```

  If you forked the repository, clone your fork instead:

  ```shell
  git clone https://github.com/your_git_username/RuneScript.git
  ```
- Navigate to the project directory:

  ```shell
  cd RuneScript
  ```
- Create a virtual environment:
  - On Windows:

    ```shell
    python -m venv .venv
    ```

  - On macOS and Linux:

    ```shell
    python3 -m venv .venv
    ```

- Activate the virtual environment:
  - On Windows:

    ```shell
    .\.venv\Scripts\activate
    ```

  - On macOS and Linux:

    ```shell
    source .venv/bin/activate
    ```
- Install the required dependencies:
  - On Windows:

    ```shell
    .\.venv\Scripts\pip install -r requirements.txt
    .\.venv\Scripts\pip install -r src/models/requirements.txt
    ```

  - On macOS and Linux:

    ```shell
    .venv/bin/pip install -r requirements.txt
    .venv/bin/pip install -r src/models/requirements.txt
    ```
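A quick way to sanity-check that the requirements actually landed in the virtual environment is to list the installed distributions. Run this snippet with the venv's interpreter; it is illustrative and makes no assumptions about which packages the requirements files contain.

```python
from importlib import metadata

# List installed distributions in the active environment to confirm
# the requirements files were installed into the venv.
names = sorted(
    name
    for name in (dist.metadata["Name"] for dist in metadata.distributions())
    if name
)
print(f"{len(names)} packages installed; first few: {names[:5]}")
```

If the count is only a handful (just `pip` and friends), the installs above probably ran against the system Python instead of the venv.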
To use the stable-audio-open-1.0 model for audio generation in RuneScript:

- Create a Hugging Face account: sign up on the Hugging Face website.
- Request access to the model: visit the stable-audio-open-1.0 model page and request access.
- Generate an API token in your Hugging Face profile settings.
- Download the model files.
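One way to script the download step is with the `huggingface_hub` package's `snapshot_download` function. This is a sketch, not part of RuneScript: it assumes `pip install huggingface_hub`, and the repo id `stabilityai/stable-audio-open-1.0`, helper name, and target directory are assumptions you should adjust to match the gated model you were granted access to.

```python
def fetch_stable_audio(token: str, target_dir: str = "src/models/model/audio") -> str:
    """Download the stable-audio-open-1.0 snapshot into the models tree.

    Requires `pip install huggingface_hub` and an account that has been
    granted access to the gated model.
    """
    # Imported lazily so this file can be loaded without the dependency.
    from huggingface_hub import snapshot_download

    return snapshot_download(
        repo_id="stabilityai/stable-audio-open-1.0",  # assumed repo id
        local_dir=target_dir,
        token=token,  # the API token from your profile settings
    )


# Usage (with a real token):
#     fetch_stable_audio(token="hf_your_token_here")
```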
- Organize the models in the repository:

  ```shell
  mkdir -p src/models/model/text
  mkdir -p src/models/model/image
  mkdir -p src/models/model/audio
  ```

  - Place `.gguf` files in `src/models/model/text`.
  - Place Stable Diffusion models in `src/models/model/image`.
  - Place Stable Audio models in `src/models/model/audio`.
- Place a valid `.gguf` file in `src/models/model/text`.
- Start the AI assistant server:

  ```shell
  .\.venv\Scripts\python.exe -m llama_cpp.server --port 8004 --model .\src\models\model\qwen2.5-coder-1.5b-q8_0.gguf
  ```
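llama-cpp-python's server exposes an OpenAI-compatible REST API, so once it is running you can sanity-check it from Python. This helper is a sketch, assuming the default `/v1/models` route and the port 8004 used above:

```python
import json
from urllib.request import urlopen


def list_models(base_url: str = "http://127.0.0.1:8004") -> list:
    """Return the model ids reported by the server's /v1/models endpoint."""
    with urlopen(f"{base_url}/v1/models") as resp:
        payload = json.load(resp)
    return [model["id"] for model in payload.get("data", [])]


# With the server running, this should include the loaded .gguf model:
#     print(list_models())
```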
- Run the application:
  - On Windows:

    ```shell
    .\.venv\Scripts\python main.py
    ```

  - On macOS and Linux:

    ```shell
    .venv/bin/python main.py
    ```