# What is Scraps?

Scraps is a static site generator that brings developer-friendly workflows to documentation, using Markdown files with simple Wiki-link notation.

More details are available here.
## Getting Started

Refer to the Getting Started document to quickly begin using Scraps.

A sample page in Japanese is available here.
## MCP Server

Scraps includes comprehensive Model Context Protocol (MCP) server functionality, enabling AI assistants like Claude Code to interact directly with your Scraps knowledge base.

### Quick Start with Claude Code

The fastest way to get started is with Claude Code. Add Scraps as an MCP server with a single command:

```
claude mcp add scraps -- scraps mcp serve --path ~/path/to/your/scraps/project/
```

Replace `~/path/to/your/scraps/project/` with the actual path to your Scraps project directory.

### Available Tools

- `search_scraps`: Search through your Scraps content with natural language queries
- `list_tags`: List all available tags in your Scraps repository
## Getting Started

### Setup

1. **Install Scraps**: follow the Installation guide to install Scraps on your system.
2. **Initialize Project**: create a new Scraps project using Init:

   ```
   ❯ scraps init my-knowledge-base
   ❯ cd my-knowledge-base
   ```

3. **Configure Project**: customize the Configuration in `Config.toml`; set your site title, base URL, and other preferences.

### Content Creation

1. **Write Markdown Files**: create Markdown files in the `/scraps` directory, using the CommonMark specification and GitHub-flavored Markdown.
2. **Add Internal Links**: connect documents using Internal Link syntax: `[[Page Name]]` for simple links, `[[Page Name|Custom Text]]` for custom link text.
3. **Enhance Content**: add Mermaid diagrams for visual representations, use Autolink functionality for external links, and organize with context folders when needed.

### Build and Preview

1. **Generate Site**: use Build to generate the static site files:

   ```
   ❯ scraps build
   ```

2. **Preview Locally**: use Serve for local preview and debugging:

   ```
   ❯ scraps serve
   ```

3. **Deploy**: deploy to platforms like GitHub Pages when ready.

### AI Integration

- **MCP Server**: enable AI assistant integration using MCP Server for intelligent search and content assistance.
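As a sketch of how the `[[Page Name]]` and `[[Page Name|Custom Text]]` syntax behaves, the following Python snippet converts wiki-links into HTML anchors. This is an illustration only, not Scraps' actual implementation; the `render_links` helper and its naive slug rule are hypothetical.

```python
import re

# Matches [[Page Name]] and [[Page Name|Custom Text]] wiki-links.
WIKI_LINK = re.compile(r"\[\[([^\]|]+)(?:\|([^\]]+))?\]\]")

def render_links(markdown: str) -> str:
    """Replace wiki-links with HTML anchors to slugified .html files."""
    def repl(m: re.Match) -> str:
        page = m.group(1)
        text = m.group(2) or page          # fall back to the page name
        slug = page.strip().lower().replace(" ", "-")
        return f'<a href="{slug}.html">{text}</a>'
    return WIKI_LINK.sub(repl, markdown)

print(render_links("See [[Getting Started]] and [[MCP Server|the MCP docs]]."))
# → See <a href="getting-started.html">Getting Started</a> and <a href="mcp-server.html">the MCP docs</a>.
```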
## CLI/MCP Serve

Tags: #CLI

```
❯ scraps mcp serve
```

This command starts an MCP (Model Context Protocol) server that enables AI assistants like Claude Code to interact directly with your Scraps knowledge base.

### Examples

```
# Basic MCP server
❯ scraps mcp serve

# Serve from a specific directory
❯ scraps mcp serve --path /path/to/project
```

The MCP server provides tools for AI assistants to search through your content and list available tags, enabling intelligent assistance with your documentation. For more details, see MCP Server.
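Under the hood, a stdio-based MCP server exchanges JSON-RPC 2.0 messages with the client. This Python sketch shows the kind of requests a client would send after `scraps mcp serve` starts; the `search_scraps` tool name comes from the MCP Server scrap, while the `query` argument schema, the client info, and the protocol version string are assumptions for illustration.

```python
import json

def jsonrpc(method: str, params: dict, msg_id: int) -> str:
    """Serialize a JSON-RPC 2.0 request as a single line, the framing
    used by stdio-based MCP servers (one message per line)."""
    return json.dumps(
        {"jsonrpc": "2.0", "id": msg_id, "method": method, "params": params}
    )

# A client first initializes, then lists tools, then calls one.
messages = [
    jsonrpc("initialize", {
        "protocolVersion": "2024-11-05",            # assumed revision
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1"},
    }, 1),
    jsonrpc("tools/list", {}, 2),
    jsonrpc("tools/call", {
        "name": "search_scraps",                    # tool from the docs above
        "arguments": {"query": "getting started"},  # assumed argument schema
    }, 3),
]
for line in messages:
    print(line)
```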
## Configuration

Configuration is managed by `Config.toml` in the Scraps project. Only the `base_url` and `title` variables are required; everything else is optional. All configuration variables used by Scraps and their default values are listed below.

```toml
# The site base URL
base_url = "https://username.github.io/repository-name/"

# The site title
title = ""

# The site language (compliant with ISO 639-1, default=en)
lang_code = "en"

# The site description (optional)
description = ""

# The site favicon in the form of a PNG file URL (optional)
favicon = ""

# The site timezone (optional, default=UTC)
timezone = "UTC"

# The site color scheme (optional, default=os_setting, choices=os_setting, only_light, or only_dark)
color_scheme = "os_setting"

# Build a search index with the Fuse JSON and display the search UI (optional, default=true, choices=true or false)
build_search_index = true

# Scraps sort key on the index page (optional, default=committed_date, choices=committed_date or linked_count)
sort_key = "committed_date"

# Scraps pagination on the index page (optional, default=no pagination)
paginate_by = 20
```

### Common Configuration Examples

Personal Knowledge Base:

```toml
base_url = "https://your-username.github.io/knowledge-base/"
title = "My Knowledge Base"
description = "Personal notes and documentation"
sort_key = "committed_date"
paginate_by = 50
```

Team Documentation:

```toml
base_url = "https://company.github.io/docs/"
title = "Team Documentation"
description = "Internal team knowledge and processes"
lang_code = "en"
timezone = "America/New_York"
sort_key = "linked_count"
color_scheme = "only_light"
```

Minimal Setup:

```toml
base_url = "https://my-site.com/"
title = "Simple Docs"
build_search_index = false
```
## Feature/Search

Tags: #Search

### Search index format

Scraps can build a search index using the Fuse JSON schema, as shown below:

```json
[
  { "title": "Search", "url": "http://127.0.0.1:1112/scraps/search.html" },
  { "title": "Overview", "url": "http://127.0.0.1:1112/scraps/overview.html" },
  ...
]
```

### Search libraries

Scraps performs searches over content with fuse.js using this index. WASM solutions such as tinysearch are under consideration for future performance improvements in the deployment environment.

### Configuration

If you are not using the search function, modify your `Config.toml` as follows. See the Configuration page for details.

```toml
# Build a search index with the Fuse JSON and display the search UI (optional, default=true, choices=true or false)
build_search_index = false
```
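To make the index shape concrete, here is a minimal Python sketch that loads a Fuse-style index and filters it by title. Real client-side search uses fuse.js fuzzy scoring, so this `search` helper is only an illustration of the index format.

```python
import json

# A Fuse-style index as produced at build time: a flat list of
# {"title", "url"} entries.
INDEX_JSON = """
[
  {"title": "Search",   "url": "http://127.0.0.1:1112/scraps/search.html"},
  {"title": "Overview", "url": "http://127.0.0.1:1112/scraps/overview.html"}
]
"""

def search(index: list, query: str) -> list:
    """Case-insensitive substring match over entry titles.
    (fuse.js scores fuzzy matches; this only demonstrates the shape.)"""
    q = query.lower()
    return [entry for entry in index if q in entry["title"].lower()]

index = json.loads(INDEX_JSON)
for hit in search(index, "over"):
    print(hit["url"])  # → http://127.0.0.1:1112/scraps/overview.html
```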
## CLI/Build

Tags: #CLI

```
❯ scraps build
```

This command processes Markdown files from the `/scraps` directory and generates a static website.

### Source Structure

```
❯ tree scraps
scraps
├── Getting Started.md
└── Documentation.md
```

### Generated Files

The command generates the following files in the `public` directory:

```
❯ tree public
public
├── index.html            # Main page with the scrap list
├── getting-started.html
├── documentation.html
├── main.css              # Styling for the site
└── search.json           # Search index (if enabled)
```

Each Markdown file is converted to a slugified HTML file. Additional files such as `index.html` and `main.css` are generated to create a complete static website.

### Examples

```
# Basic build
❯ scraps build

# Build with verbose output
❯ scraps build --verbose

# Build from a specific directory
❯ scraps build --path /path/to/project
```

After building, use Serve to preview your site locally.
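The filename-to-slug mapping shown here ("Getting Started.md" becomes "getting-started.html") can be sketched in a few lines of Python. This `slugify` helper is an approximation for illustration, not Scraps' own slugifier.

```python
import re
import unicodedata

def slugify(filename: str) -> str:
    """Map a Markdown filename to a slugified HTML filename,
    e.g. 'Getting Started.md' -> 'getting-started.html'.
    (Approximate rule for illustration, not Scraps' implementation.)"""
    stem = filename.removesuffix(".md")
    stem = unicodedata.normalize("NFKD", stem)        # fold accents apart
    stem = re.sub(r"[^\w\s-]", "", stem).strip().lower()
    return re.sub(r"[\s_]+", "-", stem) + ".html"

print(slugify("Getting Started.md"))  # → getting-started.html
print(slugify("Documentation.md"))    # → documentation.html
```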
## Color Scheme
## CLI/Serve

Tags: #CLI

```
❯ scraps serve
```

This command starts a local development server to preview your static site. The server automatically serves the files from the `public` directory at `http://127.0.0.1:1112`.

### Examples

```
# Basic serve
❯ scraps serve

# Serve from a specific directory
❯ scraps serve --path /path/to/project
```

Use this command to check how your site looks and functions before deployment.
## CLI/Search

Tags: #CLI

```
❯ scraps search <QUERY>
```

This command searches through your Scraps content using fuzzy matching to find relevant information across your knowledge base.

### Examples

```
# Basic search
❯ scraps search "markdown"

# Limit results to 10
❯ scraps search "documentation" --num 10
```

The search uses fuzzy matching across file names, content, and Wiki-link references, displaying results ranked by relevance.
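Relevance-ranked fuzzy matching of the sort described here can be sketched with Python's standard-library `difflib`. This is a generic illustration of the technique, not the exact algorithm `scraps search` uses.

```python
from difflib import SequenceMatcher

def fuzzy_search(query: str, documents: dict, limit: int = 10) -> list:
    """Rank documents by a simple fuzzy similarity between the query
    and each document's name plus content, highest score first.
    (Generic sketch; not the scraps CLI's actual scoring.)"""
    scores = {
        name: SequenceMatcher(
            None, query.lower(), f"{name} {text}".lower()
        ).ratio()
        for name, text in documents.items()
    }
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:limit]

docs = {
    "Markdown Basics": "How to write CommonMark documents",
    "Deploy Guide": "Publishing to GitHub Pages",
}
best, _score = fuzzy_search("markdown", docs)[0]
print(best)  # → Markdown Basics
```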
## CLI/Init

Tags: #CLI

```
❯ scraps init <PROJECT_NAME>
```

This command initializes a new Scraps project. It creates the following structure:

```
❯ tree -a -L 1
.
├── .gitignore    # Git ignore patterns for Scraps projects
├── Config.toml   # Project configuration file
└── scraps        # Directory for your Markdown files
```

### Examples

```
# Initialize a new project
❯ scraps init my-knowledge-base
❯ cd my-knowledge-base

# Initialize at a specific path
❯ scraps init docs --path /path/to/workspace
```

After initializing the project, proceed to Build to generate your static site.