Scraps is a portable CLI knowledge hub for managing interconnected Markdown documentation with Wiki-link notation.
Learn more: What is Scraps?
Documentation
This documentation follows the Diátaxis framework:
- Tutorials - Learn Scraps: Getting Started
- How-to Guides - Solve problems: Deploy to GitHub Pages, Setup LSP, Use Templates
- Reference - Look up details: Build, Configuration, Normal Link
- Explanation - Understand concepts: What is Scraps?, Search Architecture
Browse by topic: #CLI #Wiki-Links #Markdown
-
Reference/MCP Tools #Integration

This reference documents the MCP (Model Context Protocol) tools provided by Scraps for AI assistant integration.

## search_scraps

Search through your Scraps content with natural language queries using fuzzy matching.

Parameters:
- `query` (string, required): Search query to match against scrap titles and body content
- `num` (integer, optional): Maximum number of results to return (default: 100)
- `logic` (string, optional): Search logic for combining multiple keywords:
  - `"or"` (default): Any keyword can match
  - `"and"`: All keywords must match

Returns:
- `results`: Array of matching scraps with the following fields:
  - `title`: Scrap title
  - `ctx`: Context folder path (null if in root)
  - `md_text`: Full Markdown content
- `count`: Total number of matches found

Examples:
- `{"query": "rust cli", "logic": "and"}` - Returns scraps containing both "rust" AND "cli"
- `{"query": "rust cli", "logic": "or"}` - Returns scraps containing "rust" OR "cli"

## list_tags

List all available tags in your Scraps repository with their backlink counts, sorted by popularity.

Parameters: None

Returns: Array of tags with the following fields:
- `title`: Tag name
- `backlinks_count`: Number of scraps referencing this tag

## lookup_scrap_links

Find outbound wiki links from a specific scrap. Returns all scraps that the specified scrap links to.

Parameters:
- `title` (string, required): Title of the scrap to get links for
- `ctx` (string, optional): Context if the scrap has one

Returns: Array of linked scraps with their full content.

## lookup_scrap_backlinks

Find inbound wiki links (backlinks) to a specific scrap. Returns all scraps that link to the specified scrap.

Parameters:
- `title` (string, required): Title of the scrap to get backlinks for
- `ctx` (string, optional): Context if the scrap has one

Returns: Array of scraps that link to the specified scrap, with their full content.

## lookup_tag_backlinks

Find all scraps that reference a specific tag.

Parameters:
- `tag` (string, required): Tag name to get backlinks for

Returns: Array of scraps that reference the specified tag, with their full content.

## Notes

- All search and lookup operations are performed against the current state of your Scraps repository
- Fuzzy matching is used for search queries to improve discoverability
- Results include the full Markdown content of matching scraps
- The MCP server must be running for these tools to be available to your AI assistant

For setup instructions, see Integrate with AI Assistants.
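For illustration, a `search_scraps` call and the shape of a result might look like the following. The exact JSON-RPC envelope depends on your MCP client, and the result values here are hypothetical; field names follow the reference above.

```json
{
  "name": "search_scraps",
  "arguments": { "query": "rust cli", "logic": "and", "num": 10 }
}
```

```json
{
  "results": [
    {
      "title": "Getting Started",
      "ctx": null,
      "md_text": "This guide covers using Scraps as a static site generator..."
    }
  ],
  "count": 1
}
```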
-
Tutorial/Installation

You can find the latest version on GitHub Releases.

## Requirements

The `git` command is required for some features.

## Cargo

```sh
❯ cargo install scraps
```

## macOS / Linux (Homebrew)

```sh
❯ brew install boykush/tap/scraps
```

## GitHub Releases

Download the binary for your platform and place it in your PATH:

```sh
# macOS (Apple Silicon)
❯ curl -sL https://github.com/boykush/scraps/releases/latest/download/scraps-aarch64-apple-darwin.tar.gz | tar xz

# macOS (Intel)
❯ curl -sL https://github.com/boykush/scraps/releases/latest/download/scraps-x86_64-apple-darwin.tar.gz | tar xz

# Linux (x86_64)
❯ curl -sL https://github.com/boykush/scraps/releases/latest/download/scraps-x86_64-unknown-linux-gnu.tar.gz | tar xz

# Linux (ARM64)
❯ curl -sL https://github.com/boykush/scraps/releases/latest/download/scraps-aarch64-unknown-linux-gnu.tar.gz | tar xz
```

Then move the binary to a directory in your PATH:

```sh
❯ sudo mv scraps /usr/local/bin/
```
-
Tutorial/Getting Started

This guide covers using Scraps as a static site generator (SSG).

## Setup

1. Install Scraps - Follow the Installation guide to install Scraps on your system
2. Initialize Project - Create a new Scraps project using Init:

```sh
❯ scraps init my-knowledge-base
❯ cd my-knowledge-base
```

3. Configure Project - Follow Configuration to set up your .scraps.toml

## Content Creation

1. Write Markdown Files - Create Markdown files in the /scraps directory, using CommonMark and GitHub-flavored Markdown
2. Add Internal Links - Connect documents using Normal Link syntax:
   - `[[Page Name]]` for simple links
   - `[[Page Name|Custom Text]]` for custom link text
3. Enhance Content:
   - Add Mermaid diagrams for visual representations
   - Use Autolink functionality for external links
   - Organize with context folders when needed

## Build and Preview

1. Generate Site - Use Build to generate static site files:

```sh
❯ scraps build
```

2. Preview Locally - Use Serve for local preview and debugging:

```sh
❯ scraps serve
```

3. Deploy - Deploy to platforms like GitHub Pages when ready (see Deploy to GitHub Pages)

## AI Integration

MCP Server: Enable AI assistant integration using Integrate with AI Assistants for intelligent search and content assistance.
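Putting the content-creation steps together, a scrap file in `/scraps` might look like the sketch below. The file name and link targets are illustrative, not part of any real project.

````markdown
# Project Overview

This project builds on [[Getting Started]] and the
[[Configuration|project configuration]] described elsewhere.

External links such as https://example.com are autolinked.

```mermaid
graph LR
  A[Write scraps] --> B[Build site]
  B --> C[Deploy]
```
````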
-
Reference/Configuration #Configuration

Configuration is managed by .scraps.toml in the Scraps project.

## Configuration Structure

The configuration file has two sections:

- Root level: Contains scraps_dir and timezone for general settings
- [ssg] section: Contains all static site generator settings

The [ssg] section is required for the build and serve commands. Other commands like tag, mcp, and template can work without this section. Within the [ssg] section, base_url and title are required fields.

## Configuration Variables

All configuration variables used by Scraps and their default values are listed below.

```toml
# The scraps directory path relative to this .scraps.toml (optional, default=scraps)
scraps_dir = "scraps"

# The site timezone (optional, default=UTC)
timezone = "UTC"

# SSG (Static Site Generator) configuration section
# This section is required for build and serve commands
[ssg]
# The site base url (required)
base_url = "https://username.github.io/repository-name/"

# The site title (required)
title = ""

# The site language (compliant with iso639-1, default=en)
lang_code = "en"

# The site description (optional)
description = ""

# The site favicon in the form of png file URL (optional)
favicon = ""

# The site color scheme
# (optional, default=os_setting, choices=os_setting or only_light or only_dark)
color_scheme = "os_setting"

# Build a search index with the Fuse JSON and display search UI
# (optional, default=true, choices=true or false)
build_search_index = true

# Scraps sort key choice on index page
# (optional, default=committed_date, choices=committed_date or linked_count)
sort_key = "committed_date"

# Scraps pagination on index page (optional, default=no pagination)
paginate_by = 20
```
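Since only base_url and title are required within [ssg], a minimal working .scraps.toml can be as short as the sketch below (the URL and title values are placeholders):

```toml
[ssg]
base_url = "https://example.github.io/my-knowledge-base/"
title = "My Knowledge Base"
```

Every other setting then falls back to its documented default.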
-
Explanation/Search Architecture #Static Site

## Search index format

Scraps can build a search index using the Fuse JSON schema, as shown below.

```json
[
  {
    "title": "Search",
    "url": "http://127.0.0.1:1112/scraps/search.html"
  },
  {
    "title": "Overview",
    "url": "http://127.0.0.1:1112/scraps/overview.html"
  },
  ...
]
```

## Search libraries

Scraps sites perform content searches with fuse.js using this index. We are considering WASM solutions like tinysearch for future performance improvements in our deployment environment.

## Configuration

If you are not using the search function, modify your .scraps.toml as follows. See the Configuration page for details.

```toml
[ssg]
# Build a search index with the Fuse JSON and display search UI (optional, default=true, choices=true or false)
build_search_index = false
```
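To make the data flow concrete, here is a minimal sketch of how client-side code might consume a Fuse JSON index of the shape shown above. Real Scraps sites use fuse.js for fuzzy scoring; this stand-in substitutes a simple case-insensitive substring filter, and the index entries are the illustrative ones from this page.

```javascript
// Illustrative only: consuming a Fuse JSON index in a browser-side script.
// fuse.js would do fuzzy matching; this sketch shows just the data flow.
const index = [
  { title: "Search", url: "http://127.0.0.1:1112/scraps/search.html" },
  { title: "Overview", url: "http://127.0.0.1:1112/scraps/overview.html" },
];

function searchIndex(entries, query) {
  const q = query.toLowerCase();
  // Match against the "title" field, roughly what fuse.js does
  // when configured with keys: ["title"]
  return entries.filter((entry) => entry.title.toLowerCase().includes(q));
}

console.log(searchIndex(index, "over").map((e) => e.url));
```

A matching entry carries its `url`, which is all the search UI needs to render a result link.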
-
Reference/Color Scheme
-
Explanation/Template System #Templates

Generate scrap files from predefined Markdown templates for efficient content creation.

## Basic Usage

1. Create template files in the /templates directory
2. Generate a scrap from the command line

## Template Syntax

Templates use the Tera template engine with TOML metadata:

```
+++
title = "{{ now() | date(timezone=timezone) }}"
+++

# Content goes here
```

## Available Variables

- timezone - Access the .scraps.toml timezone setting
- All Tera built-in functions

## Examples

See Use Templates for ready-to-use templates. For CLI commands, see Template.
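As an illustration, a daily-note template combining the metadata block with Tera's built-in `now()` function and `date` filter might look like the sketch below. The `format` argument and the body headings are hypothetical choices, not Scraps defaults.

```
+++
title = "{{ now() | date(format="%Y-%m-%d", timezone=timezone) }}"
+++

## Log

## Ideas
```

Generating a scrap from this template would produce a file titled with the current date in the configured timezone.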
-
Reference/Init #CLI

```sh
❯ scraps init <PROJECT_NAME>
```

This command initializes a new Scraps project. It creates the following structure:

```sh
❯ tree -a -L 1
.
├── .gitignore    # Git ignore patterns for Scraps projects
├── .scraps.toml  # Project configuration file
└── scraps        # Directory for your Markdown files
```

## Examples

```sh
# Initialize new project
❯ scraps init my-knowledge-base
❯ cd my-knowledge-base

# Initialize with specific path
❯ scraps init docs --path /path/to/workspace
```

After initializing the project, proceed to Build to generate your static site.
-
How-to/Deploy to GitHub Pages #Deployment

Custom actions are available to deploy Scraps to GitHub Pages.

- Repository: scraps-deploy-action
- Marketplace: Scraps Deploy to Pages

## YAML file

Prepare a YAML file under .github/workflows/ like this:

```yaml
name: Deploy scraps github pages
on:
  push:
    branches:
      - main
    paths:
      - 'scraps/**'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: checkout
        uses: actions/checkout@v6
        with:
          fetch-depth: 0 # For scraps git committed date
      - name: build_and_deploy
        uses: boykush/scraps-deploy-action@v3
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          pages-branch: gh-pages
```

## GitHub settings

Set up GitHub Pages for the repository with the following build and deployment parameters:

- Source: Deploy from a branch
- Branch: gh-pages
-
How-to/Install Claude Code Plugin #Integration

This guide shows you how to enable the Scraps MCP (Model Context Protocol) server plugin in Claude Code.

## Installation

### Step 1: Add the Plugin Marketplace

First, add the Scraps plugin marketplace:

```sh
claude plugin marketplace add boykush/scraps
```

This registers the Scraps plugin catalog with Claude Code.

### Step 2: Enable the Plugin

Add the following to your project's .claude/settings.json:

```json
{
  "enabledPlugins": {
    "mcp-server@scraps-claude-code-plugins": true
  }
}
```

The plugin will automatically use the current directory as your Scraps project path.

## Configuration

### Custom Project Path (Optional)

To specify a different Scraps project path, set the SCRAPS_PROJECT_PATH environment variable:

```json
{
  "env": {
    "SCRAPS_PROJECT_PATH": "/path/to/your/scraps/project"
  },
  "enabledPlugins": {
    "mcp-server@scraps-claude-code-plugins": true
  }
}
```