# What is Scraps?

Scraps is a portable CLI knowledge hub for managing interconnected Markdown documentation with Wiki-link notation. More details are available here.

## Getting Started

You can refer to the Getting Started document to quickly begin using Scraps. A sample page in Japanese is available here.
# Configuration

Configuration is managed by `Config.toml` in the Scraps project.

## Quick Setup Guide

To get started with your Scraps site, you need to edit at least two required fields in your `Config.toml` file.

Step 1: Edit `base_url`. Replace the placeholder URL with your actual site URL:

```toml
base_url = "https://yourusername.github.io/your-repository/"
```

Step 2: Set your site title. Add your desired site title:

```toml
title = "My Knowledge Base"
```

## Configuration Variables

Only the `base_url` and `title` variables are required. Everything else is optional. All configuration variables used by Scraps and their default values are listed below.

```toml
# The site base url
base_url = "https://username.github.io/repository-name/"

# The site title
title = ""

# The scraps directory path relative to this Config.toml (optional, default=scraps)
scraps_dir = "scraps"

# The site language (compliant with iso639-1, default=en)
lang_code = "en"

# The site description (optional)
description = ""

# The site favicon in the form of png file URL (optional)
favicon = ""

# The site timezone (optional, default=UTC)
timezone = "UTC"

# The site color scheme (optional, default=os_setting, choices=os_setting or only_light or only_dark)
color_scheme = "os_setting"

# Build a search index with the Fuse JSON and display search UI (optional, default=true, choices=true or false)
build_search_index = true

# Scraps sort key choice on index page (optional, default=committed_date, choices=committed_date or linked_count)
sort_key = "committed_date"

# Scraps pagination on index page (optional, default=no pagination)
paginate_by = 20
```
# Getting Started

## Setup

1. **Install Scraps**: Follow the Installation guide to install Scraps on your system.
2. **Initialize Project**: Create a new Scraps project using Init:

   ```
   ❯ scraps init my-knowledge-base
   ❯ cd my-knowledge-base
   ```

3. **Configure Project**: Open `Config.toml` in your project directory and edit the required fields:

   ```toml
   base_url = "https://yourusername.github.io/your-repository/"
   title = "My Knowledge Base"
   ```

   Customize other Configuration options as needed.

## Content Creation

1. **Write Markdown Files**: Create Markdown files in the `/scraps` directory, using the CommonMark specification and GitHub-flavored Markdown.
2. **Add Internal Links**: Connect documents using Internal Link syntax:
   - `[[Page Name]]` for simple links
   - `[[Page Name|Custom Text]]` for custom link text
3. **Enhance Content**: Add Mermaid diagrams for visual representations, use Autolink functionality for external links, and organize with context folders when needed.

## Build and Preview

1. **Generate Site**: Use Build to generate static site files:

   ```
   ❯ scraps build
   ```

2. **Preview Locally**: Use Serve for local preview and debugging:

   ```
   ❯ scraps serve
   ```

3. **Deploy**: Deploy to platforms like GitHub Pages when ready.

## AI Integration

- **MCP Server**: Enable AI assistant integration using MCP Server for intelligent search and content assistance.
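To illustrate the internal link syntax described above, here is a rough Python sketch (not Scraps' actual parser) of how the two `[[...]]` forms can be recognized with a regular expression, yielding a target page and a display text for each link:

```python
import re

# [[Page Name]] or [[Page Name|Custom Text]]
WIKI_LINK = re.compile(r"\[\[([^\]|]+)(?:\|([^\]]+))?\]\]")

def parse_wiki_links(text: str) -> list[tuple[str, str]]:
    """Return (target page, display text) pairs for each [[...]] link."""
    return [(m.group(1), m.group(2) or m.group(1)) for m in WIKI_LINK.finditer(text)]

notes = "See [[Getting Started]] and [[Configuration|the config reference]]."
print(parse_wiki_links(notes))
# -> [('Getting Started', 'Getting Started'), ('Configuration', 'the config reference')]
```

When no `|Custom Text` part is present, the page name itself is used as the display text.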
# GitHub Pages

#Deployment

Custom actions are available to deploy Scraps to GitHub Pages:

- scraps-deploy-action

## YAML file

Prepare a YAML file under `.github/workflows/` like this:

```yaml
name: Deploy scraps github pages
on:
  push:
    branches:
      - main
    paths:
      - 'scraps/**'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: checkout
        uses: actions/checkout@v5
        with:
          fetch-depth: 0 # For scraps git committed date
      - name: build_and_deploy
        uses: boykush/scraps-deploy-action@v2
        env:
          # Target branch
          PAGES_BRANCH: gh-pages
          TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

## GitHub settings

Set up GitHub Pages for the repository with the following build and deployment parameters:

- Source: Deploy from a branch
- Branch: gh-pages
# MCP Server

Scraps includes comprehensive Model Context Protocol (MCP) server functionality, enabling AI assistants like Claude Code to directly interact with your Scraps knowledge base.

## Quick Start with Claude Code

The fastest way to get started is with Claude Code. Add Scraps as a Model Context Protocol server with a single command:

```
claude mcp add scraps -- scraps mcp serve --path ~/path/to/your/scraps/project/
```

Replace `~/path/to/your/scraps/project/` with the actual path to your Scraps project directory.

## Available Tools

- `search_scraps`: Search through your Scraps content with natural language queries
- `list_tags`: List all available tags in your Scraps repository
- `lookup_scrap_links`: Find outbound wiki links from a specific scrap
- `lookup_scrap_backlinks`: Find scraps that link to a specific scrap
- `lookup_tag_backlinks`: Find all scraps that reference a specific tag
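MCP tools are invoked over JSON-RPC 2.0 using the `tools/call` method defined by the Model Context Protocol. The Python sketch below builds such a request for one of the tools listed above; note that the `query` argument name is illustrative, since the exact argument schema of `search_scraps` is not documented here:

```python
import json

def mcp_tool_call(tool: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request body for an MCP tools/call invocation."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical query; the argument name is an assumption, not from the Scraps docs.
print(mcp_tool_call("search_scraps", {"query": "deployment"}))
```

In practice the MCP client (e.g. Claude Code) constructs and sends these requests for you over the server's transport; this only shows the wire-level shape of a tool invocation.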
# CLI/MCP Serve

#CLI

```
❯ scraps mcp serve
```

This command starts an MCP (Model Context Protocol) server that enables AI assistants like Claude Code to directly interact with your Scraps knowledge base.

## Examples

```
# Basic MCP server
❯ scraps mcp serve

# Serve from specific directory
❯ scraps mcp serve --path /path/to/project
```

The MCP server provides tools for AI assistants to search through your content and list available tags, enabling intelligent assistance with your documentation. For more details, see MCP Server.
# Color Scheme
# CLI/Build

#CLI

```
❯ scraps build
```

This command processes Markdown files from the `/scraps` directory and generates a static website.

## Source Structure

```
❯ tree scraps
scraps
├── Getting Started.md
└── Documentation.md
```

## Generated Files

The command generates the following files in the `public` directory:

```
❯ tree public
public
├── index.html            # Main page with scrap list
├── getting-started.html
├── documentation.html
├── main.css              # Styling for the site
└── search.json           # Search index (if enabled)
```

Each Markdown file is converted to a slugified HTML file. Additional files like `index.html` and `main.css` are generated to create a complete static website.

## Examples

```
# Basic build
❯ scraps build

# Build with verbose output
❯ scraps build --verbose

# Build from specific directory
❯ scraps build --path /path/to/project
```

After building, use Serve to preview your site locally.
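As a rough illustration of the slugification step (Scraps' exact rules may differ), a typical slug function lowercases the title and collapses runs of non-alphanumeric characters into hyphens, which is consistent with `Getting Started.md` becoming `getting-started.html`:

```python
import re

def slugify(title: str) -> str:
    """Lowercase and collapse non-alphanumeric runs into hyphens (illustrative)."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

print(slugify("Getting Started"))  # -> getting-started
print(slugify("Documentation"))    # -> documentation
```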
# Feature/Search

#Search

## Search index format

Scraps can build a search index using the Fuse JSON schema, as shown below:

```json
[
  {
    "title": "Search",
    "url": "http://127.0.0.1:1112/scraps/search.html"
  },
  {
    "title": "Overview",
    "url": "http://127.0.0.1:1112/scraps/overview.html"
  },
  ...
]
```

## Search libraries

Scraps performs content searches with fuse.js using this index. WASM solutions like tinysearch are under consideration for future performance improvements in the deployment environment.

## Configuration

If you are not using the search function, modify your Config.toml as follows. See the Configuration page for details.

```toml
# Build a search index with the Fuse JSON and display search UI (optional, default=true, choices=true or false)
build_search_index = false
```
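The index format is plain JSON, so producing one is straightforward to sketch. The Python snippet below (a hypothetical helper, not part of Scraps) serializes title/URL pairs into a Fuse-style JSON array of the shape shown in the schema above:

```python
import json

def build_search_index(pages: list[tuple[str, str]]) -> str:
    """Serialize (title, url) pairs into a Fuse-style JSON array."""
    return json.dumps([{"title": t, "url": u} for t, u in pages], indent=2)

index = build_search_index([
    ("Search", "http://127.0.0.1:1112/scraps/search.html"),
    ("Overview", "http://127.0.0.1:1112/scraps/overview.html"),
])
print(index)
```

Each entry carries only a title and a URL; fuse.js then performs fuzzy matching over the `title` fields client-side.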
# CLI/Template

#CLI #Templates

```
❯ scraps template
```

This command generates scrap files from Markdown templates located in the `/templates` directory.

## Commands

### List Templates

```
❯ scraps template list
```

Lists all available templates in the `/templates` directory. Example output:

```
daily_note
book
meeting
project
```

### Generate Scrap from Template

```
❯ scraps template generate <TEMPLATE_NAME> [OPTIONS]
```

Generates a scrap file from the specified template.

## Examples

```
# List available templates
❯ scraps template list

# Generate from a template with a metadata-specified title
❯ scraps template generate daily_note

# Generate with a command-line title
❯ scraps template generate meeting -t "Weekly Standup"

# Generate with environment variables
❯ TITLE="My Book Review" scraps template generate book
```

## References

- Template features and syntax: Templates
- Template samples: Sample templates