feat: daily incremental update - Xiaohongshu post images / sentiment records / daily report / draft archiving
skills/agent-browser-clawdbot/.clawhub/origin.json (new file, 7 lines)
@@ -0,0 +1,7 @@
{
  "version": 1,
  "registry": "https://clawhub.ai",
  "slug": "agent-browser-clawdbot",
  "installedVersion": "0.1.0",
  "installedAt": 1776838206748
}
skills/agent-browser-clawdbot/SKILL.md (new file, 206 lines)
@@ -0,0 +1,206 @@
---
name: agent-browser
description: Headless browser automation CLI optimized for AI agents, with accessibility-tree snapshots and ref-based element selection
metadata: {"clawdbot":{"emoji":"🌐","requires":{"commands":["agent-browser"]},"homepage":"https://github.com/vercel-labs/agent-browser"}}
---

# Agent Browser Skill

Fast browser automation using accessibility-tree snapshots with refs for deterministic element selection.

## Why Use This Over the Built-in Browser Tool

**Use agent-browser when:**
- Automating multi-step workflows
- You need deterministic element selection
- Performance is critical
- Working with complex SPAs
- You need session isolation

**Use the built-in browser tool when:**
- You need screenshots/PDFs for analysis
- Visual inspection is required
- Browser-extension integration is needed

## Core Workflow

```bash
# 1. Navigate and snapshot
agent-browser open https://example.com
agent-browser snapshot -i --json

# 2. Parse refs from the JSON, then interact
agent-browser click @e2
agent-browser fill @e3 "text"

# 3. Re-snapshot after page changes
agent-browser snapshot -i --json
```

## Key Commands

### Navigation
```bash
agent-browser open <url>
agent-browser back | forward | reload | close
```

### Snapshot (always use -i --json)
```bash
agent-browser snapshot -i --json          # Interactive elements, JSON output
agent-browser snapshot -i -c -d 5 --json  # + compact output, depth limit
agent-browser snapshot -s "#main" -i      # Scope to a selector
```

### Interactions (ref-based)
```bash
agent-browser click @e2
agent-browser fill @e3 "text"
agent-browser type @e3 "text"
agent-browser hover @e4
agent-browser check @e5 | uncheck @e5
agent-browser select @e6 "value"
agent-browser press "Enter"
agent-browser scroll down 500
agent-browser drag @e7 @e8
```

### Get Information
```bash
agent-browser get text @e1 --json
agent-browser get html @e2 --json
agent-browser get value @e3 --json
agent-browser get attr @e4 "href" --json
agent-browser get title --json
agent-browser get url --json
agent-browser get count ".item" --json
```

### Check State
```bash
agent-browser is visible @e2 --json
agent-browser is enabled @e3 --json
agent-browser is checked @e4 --json
```

### Wait
```bash
agent-browser wait @e2                   # Wait for an element
agent-browser wait 1000                  # Wait in milliseconds
agent-browser wait --text "Welcome"      # Wait for text
agent-browser wait --url "**/dashboard"  # Wait for a URL
agent-browser wait --load networkidle    # Wait for the network to settle
agent-browser wait --fn "window.ready === true"
```

### Sessions (isolated browsers)
```bash
agent-browser --session admin open site.com
agent-browser --session user open site.com
agent-browser session list
# Or via env: AGENT_BROWSER_SESSION=admin agent-browser ...
```

### State Persistence
```bash
agent-browser state save auth.json  # Save cookies/storage
agent-browser state load auth.json  # Load (skip login)
```

### Screenshots & PDFs
```bash
agent-browser screenshot page.png
agent-browser screenshot --full page.png
agent-browser pdf page.pdf
```

### Network Control
```bash
agent-browser network route "**/ads/*" --abort           # Block
agent-browser network route "**/api/*" --body '{"x":1}'  # Mock
agent-browser network requests --filter api              # View
```

### Cookies & Storage
```bash
agent-browser cookies                  # Get all cookies
agent-browser cookies set name value
agent-browser storage local key        # Get a localStorage value
agent-browser storage local set key val
```

### Tabs & Frames
```bash
agent-browser tab new https://example.com
agent-browser tab 2      # Switch to tab 2
agent-browser frame @e5  # Switch to an iframe
agent-browser frame main # Back to the main frame
```

## Snapshot Output Format

```json
{
  "success": true,
  "data": {
    "snapshot": "...",
    "refs": {
      "e1": {"role": "heading", "name": "Example Domain"},
      "e2": {"role": "button", "name": "Submit"},
      "e3": {"role": "textbox", "name": "Email"}
    }
  }
}
```
## Best Practices

1. **Always use the `-i` flag** - focus on interactive elements
2. **Always use `--json`** - easier to parse
3. **Wait for stability** - `agent-browser wait --load networkidle`
4. **Save auth state** - skip login flows with `state save/load`
5. **Use sessions** - isolate different browser contexts
6. **Use `--headed` for debugging** - see what's happening

## Example: Search and Extract

```bash
agent-browser open https://www.google.com
agent-browser snapshot -i --json
# AI identifies the search box @e1
agent-browser fill @e1 "AI agents"
agent-browser press Enter
agent-browser wait --load networkidle
agent-browser snapshot -i --json
# AI identifies result refs
agent-browser get text @e3 --json
agent-browser get attr @e4 "href" --json
```

## Example: Multi-Session Testing

```bash
# Admin session
agent-browser --session admin open app.com
agent-browser --session admin state load admin-auth.json
agent-browser --session admin snapshot -i --json

# User session (simultaneous)
agent-browser --session user open app.com
agent-browser --session user state load user-auth.json
agent-browser --session user snapshot -i --json
```

## Installation

```bash
npm install -g agent-browser
agent-browser install              # Download Chromium
agent-browser install --with-deps  # Linux: also install system deps
```

## Credits

Skill created by Yossi Elkrief ([@MaTriXy](https://github.com/MaTriXy))

agent-browser CLI by [Vercel Labs](https://github.com/vercel-labs/agent-browser)
skills/agent-browser-clawdbot/_meta.json (new file, 6 lines)
@@ -0,0 +1,6 @@
{
  "ownerId": "kn7amrtkn0tjk2r2yxf3hjgp0s7zn6g4",
  "slug": "agent-browser-clawdbot",
  "version": "0.1.0",
  "publishedAt": 1769032854381
}
skills/agent-browser/SKILL.md (new file, 27 lines)
@@ -0,0 +1,27 @@
---
name: agent-browser
description: Automates browser interactions for web testing, form filling, screenshots, and data extraction. Use when the user needs to navigate websites, interact with web pages, fill forms, take screenshots, test web applications, or extract information from web pages.
allowed-tools: Bash(agent-browser:*)
---

# Browser Automation with agent-browser

## Quick start
agent-browser open <url>       # Navigate to page
agent-browser snapshot -i      # Get interactive elements with refs
agent-browser click @e1        # Click element by ref
agent-browser fill @e2 "text"  # Fill input by ref
agent-browser close            # Close browser

## Core workflow
1. Navigate: agent-browser open <url>
2. Snapshot: agent-browser snapshot -i
3. Interact using refs from the snapshot
4. Re-snapshot after navigation or significant DOM changes

## Commands
- agent-browser open <url>       # Navigate to URL
- agent-browser snapshot -i      # Interactive elements only
- agent-browser click @e1        # Click element
- agent-browser fill @e2 "text"  # Fill input
- agent-browser screenshot       # Take screenshot
skills/auto-skill-hunter/SKILL.md (new file, 24 lines)
@@ -0,0 +1,24 @@
---
name: auto-skill-hunter
description: Proactively discovers, ranks, and installs high-value ClawHub skills by mining unresolved user needs and agent capability gaps. Use when the user asks to find new skills, explore ClawHub, or wants to expand agent capabilities.
---

# Auto Skill Hunter

## Overview
Automatically discovers, ranks, and installs high-value ClawHub skills by mining user needs and agent capability gaps.

## When to Use
- User asks to "find new skills"
- User wants to expand agent capabilities
- A task requires a capability that is not currently available
- User says "install a skill for X"

## Workflow
1. Identify the user's unmet need
2. Search the ClawHub/skills registry for matching skills
3. Rank by quality, popularity, and security
4. Install the top candidate with user confirmation

## Installation
Skills install to `~/.openclaw/skills/` or `workspace/skills/`.
skills/blog-writer/SKILL.md (new file, 48 lines)
@@ -0,0 +1,48 @@
---
name: blog-writer
description: This skill should be used when writing blog posts, articles, or long-form content. Use for drafting blog posts, thought leadership pieces, or any writing meant to reflect the writer's perspective on AI, productivity, sales, marketing, or technology topics.
---

# Blog Writer

## Overview
This skill enables writing blog posts and articles that capture the writer's distinctive voice and style.

## When to Use This Skill
- User requests blog post or article writing
- Drafting thought-leadership content
- Creating articles in a distinctive writer's voice

## Core Responsibilities
1. **Follow the Writing Style**: Match voice, word choice, and structure
2. **Incorporate Research**: Review and integrate provided materials
3. **Follow User Instructions**: Adhere to specific requests for topic and angle
4. **Produce Authentic Writing**: Create content in the writer's genuine voice

## Workflow

### Phase 1: Gather Information
- Topic or subject matter
- Any specific angle or thesis to explore
- Research materials, links, or notes (if available)
- Target length preference (default: 800-1500 words)

### Phase 2: Draft the Content
1. Start with a strong opening statement
2. Use a personal voice and first-person perspective
3. Include relevant anecdotes or professional experience
4. Structure with clear subheadings (###) every 2-3 paragraphs
5. Keep paragraphs short (2-4 sentences)
6. End with a reflection, call-to-action, or forward-looking statement

### Phase 3: Review and Iterate
Present the draft and gather feedback. Iterate until the user confirms satisfaction.

### Phase 4: Publish
Save to the drafts/ folder and notify the user for review.

## Output
- Draft: `drafts/YYYY-MM-DD_<platform>_<title>.md`
- Title: 3 alternatives (A/B/C)
- Key takeaways: 3 bullet points
- SEO keywords: 5-10
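The draft-naming convention above can be generated mechanically. A small sketch, assuming simple hyphen-slugging of the title (`draft_path` and the slug rules are not specified by this skill):

```python
import datetime
import re

def draft_path(platform: str, title: str, date: datetime.date) -> str:
    # Lowercase the title and collapse non-alphanumeric runs into single hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"drafts/{date:%Y-%m-%d}_{platform}_{slug}.md"

print(draft_path("blog", "Why AI Agents Matter", datetime.date(2025, 1, 15)))
# drafts/2025-01-15_blog_why-ai-agents-matter.md
```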
skills/feed-to-md/SKILL.md (new file, 77 lines)
@@ -0,0 +1,77 @@
---
name: feed-to-md
title: Feed to Markdown
description: Convert RSS or Atom feed URLs into Markdown using the bundled local converter script. Use this when a user asks to turn a feed URL into readable Markdown, optionally limiting items or writing to a file.
metadata: {"clawdbot":{"emoji":"📰","requires":{"bins":["python3"]}}}
---

# RSS/Atom to Markdown

Use this skill when the task is to convert an RSS/Atom feed URL into Markdown.

## What this skill does

- Converts a feed URL to Markdown via a bundled local script
- Supports stdout output or writing to a Markdown file
- Supports limiting the article count and controlling summaries

## Inputs

- Required: RSS/Atom URL
- Optional:
  - output path
  - max item count
  - template preset (`short` or `full`)

## Usage

Run the local script:

```bash
python3 scripts/feed_to_md.py "<feed_url>"
```

Write to a file:

```bash
python3 scripts/feed_to_md.py "https://example.com/feed.xml" --output feed.md
```

Limit to 10 items:

```bash
python3 scripts/feed_to_md.py "https://example.com/feed.xml" --limit 10
```

Use the full template with summaries:

```bash
python3 scripts/feed_to_md.py "https://example.com/feed.xml" --template full
```

## Security rules (required)

- Never interpolate raw user input into a shell string.
- Always pass arguments directly to the script as separate argv tokens.
- The URL must be `http` or `https` and must not resolve to localhost/private addresses.
- Every HTTP redirect target (and the final URL) is re-validated and must also resolve to public IPs.
- The output path must be workspace-relative and end in `.md`.
- Do not use shell redirection for output; use `--output`.

Safe command pattern:

```bash
cmd=(python3 scripts/feed_to_md.py "$feed_url")
[[ -n "${output_path:-}" ]] && cmd+=(--output "$output_path")
[[ -n "${limit:-}" ]] && cmd+=(--limit "$limit")
[[ "${template:-short}" = "full" ]] && cmd+=(--template full)
"${cmd[@]}"
```

## Script options

- `-o, --output <file>`: write markdown to a file
- `--limit <number>`: max number of articles
- `--no-summary`: exclude summaries
- `--summary-max-length <number>`: truncate summary length
- `--template <preset>`: `short` (default) or `full`
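The output-path rule can also be checked up front by the caller. A condensed sketch of the same validation the bundled script performs (`is_safe_output` is a hypothetical helper):

```python
import pathlib

def is_safe_output(raw: str) -> bool:
    """Mirror the script's rule: relative path, no '..', '.md' suffix, inside the workspace."""
    p = pathlib.Path(raw)
    if p.is_absolute() or ".." in p.parts or p.suffix.lower() != ".md":
        return False
    root = pathlib.Path.cwd().resolve()
    try:
        (root / p).resolve().relative_to(root)
    except ValueError:
        return False
    return True

print(is_safe_output("out/feed.md"))  # True
print(is_safe_output("../feed.md"))   # False
```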
skills/feed-to-md/scripts/feed_to_md.py (new file, 290 lines)
@@ -0,0 +1,290 @@
#!/usr/bin/env python3
"""Convert RSS/Atom feeds to Markdown with safe URL/path handling."""

from __future__ import annotations

import argparse
import html
import ipaddress
import pathlib
import re
import socket
import sys
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

TAG_RE = re.compile(r"<[^>]+>")


def normalize_text(value: str) -> str:
    text = html.unescape(value or "")
    text = TAG_RE.sub("", text)
    return " ".join(text.split()).strip()


def validate_public_hostname(hostname: str, label: str) -> None:
    if hostname in {"localhost", "localhost.localdomain"}:
        raise ValueError(f"{label} uses localhost, which is not allowed")

    try:
        addr_info = socket.getaddrinfo(hostname, None)
    except socket.gaierror as exc:
        raise ValueError(f"Unable to resolve host: {hostname}") from exc

    for item in addr_info:
        ip_raw = item[4][0]
        ip = ipaddress.ip_address(ip_raw)
        if (
            ip.is_private
            or ip.is_loopback
            or ip.is_link_local
            or ip.is_multicast
            or ip.is_reserved
            or ip.is_unspecified
        ):
            raise ValueError(f"{label} resolves to a non-public IP address")


def validate_feed_url(raw_url: str, label: str = "Feed URL") -> str:
    parsed = urllib.parse.urlparse(raw_url)
    if parsed.scheme not in {"http", "https"}:
        raise ValueError(f"{label} must use http or https")
    if not parsed.hostname:
        raise ValueError(f"{label} must include a hostname")

    hostname = parsed.hostname.strip().lower()
    validate_public_hostname(hostname, f"{label} host")

    return parsed.geturl()


def validate_output_path(raw_path: str) -> pathlib.Path:
    out_path = pathlib.Path(raw_path)
    if out_path.is_absolute():
        raise ValueError("Output path must be relative to the current workspace")
    if ".." in out_path.parts:
        raise ValueError("Output path must not contain '..'")
    if out_path.suffix.lower() != ".md":
        raise ValueError("Output path must end with .md")

    root = pathlib.Path.cwd().resolve()
    target = (root / out_path).resolve()
    try:
        target.relative_to(root)
    except ValueError as exc:
        raise ValueError("Output path escapes the current workspace") from exc
    return target


class PublicOnlyRedirectHandler(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):  # noqa: D401
        redirected_url = urllib.parse.urljoin(req.full_url, newurl)
        validate_feed_url(redirected_url, "Redirect URL")
        return super().redirect_request(req, fp, code, msg, headers, newurl)


def fetch_xml(url: str, timeout: int = 15) -> bytes:
    request = urllib.request.Request(
        url,
        headers={
            "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
            "Accept": "application/rss+xml, application/atom+xml, application/xml, text/xml, */*",
            "Accept-Language": "en-US,en;q=0.9",
        },
    )
    opener = urllib.request.build_opener(PublicOnlyRedirectHandler())
    with opener.open(request, timeout=timeout) as response:
        final_url = response.geturl()
        validate_feed_url(final_url, "Final URL")
        return response.read()


def namespace(tag: str) -> str | None:
    if tag.startswith("{") and "}" in tag:
        return tag[1:].split("}", 1)[0]
    return None


def find_text(elem: ET.Element, path: str, ns: dict[str, str] | None = None) -> str:
    child = elem.find(path, ns or {})
    if child is None or child.text is None:
        return ""
    return normalize_text(child.text)


def parse_rss(root: ET.Element) -> tuple[str, list[dict[str, str]]]:
    content_ns = {"content": "http://purl.org/rss/1.0/modules/content/"}
    channel = root.find("channel")
    if channel is None:
        raise ValueError("Invalid RSS feed: missing channel")

    feed_title = find_text(channel, "title") or "Feed"
    entries: list[dict[str, str]] = []
    for item in channel.findall("item"):
        title = find_text(item, "title") or "Untitled"
        link = find_text(item, "link")
        summary = find_text(item, "content:encoded", content_ns) or find_text(
            item, "description"
        )
        published = find_text(item, "pubDate")
        entries.append(
            {
                "title": title,
                "link": link,
                "summary": summary,
                "published": published,
            }
        )
    return feed_title, entries


def parse_atom(root: ET.Element, atom_ns: str) -> tuple[str, list[dict[str, str]]]:
    ns = {"a": atom_ns}
    feed_title = find_text(root, "a:title", ns) or "Feed"
    entries: list[dict[str, str]] = []

    for entry in root.findall("a:entry", ns):
        title = find_text(entry, "a:title", ns) or "Untitled"
        summary = find_text(entry, "a:summary", ns) or find_text(entry, "a:content", ns)
        published = find_text(entry, "a:updated", ns) or find_text(entry, "a:published", ns)

        link = ""
        href = ""  # Initialized so the fallback below is defined even when the entry has no <link> elements.
        for link_elem in entry.findall("a:link", ns):
            href = (link_elem.attrib.get("href") or "").strip()
            rel = (link_elem.attrib.get("rel") or "alternate").strip()
            if not href:
                continue
            if rel == "alternate":
                link = href
                break
        if not link:
            link = href

        entries.append(
            {
                "title": title,
                "link": link,
                "summary": summary,
                "published": published,
            }
        )

    return feed_title, entries


def parse_feed(xml_bytes: bytes) -> tuple[str, list[dict[str, str]]]:
    root = ET.fromstring(xml_bytes)
    atom_ns = namespace(root.tag)
    if atom_ns == "http://www.w3.org/2005/Atom":
        return parse_atom(root, atom_ns)
    return parse_rss(root)


def truncate(value: str, max_len: int) -> str:
    if max_len <= 0 or len(value) <= max_len:
        return value
    clipped = value[: max_len - 1].rstrip()
    return f"{clipped}…"


def render_markdown(
    feed_title: str,
    entries: list[dict[str, str]],
    template: str,
    include_summary: bool,
    summary_max_len: int,
) -> str:
    lines: list[str] = [f"# {feed_title}", ""]

    if not entries:
        lines.extend(["No feed items found.", ""])
        return "\n".join(lines).rstrip() + "\n"

    if template == "short":
        for item in entries:
            title = item["title"]
            link = item["link"]
            published = item["published"]
            line = f"- [{title}]({link})" if link else f"- {title}"
            if published:
                line += f" ({published})"
            lines.append(line)
        lines.append("")
        return "\n".join(lines)

    for item in entries:
        title = item["title"]
        link = item["link"]
        summary = truncate(item["summary"], summary_max_len)
        published = item["published"]

        lines.append(f"## [{title}]({link})" if link else f"## {title}")
        if published:
            lines.append(f"- Published: {published}")
        if include_summary and summary:
            lines.append("")
            lines.append(summary)
        lines.append("")

    return "\n".join(lines).rstrip() + "\n"


def build_arg_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Convert RSS/Atom feed URL to Markdown")
    parser.add_argument("url", help="RSS/Atom feed URL")
    parser.add_argument("-o", "--output", help="Write Markdown output to a .md file")
    parser.add_argument("--limit", type=int, default=0, help="Max number of feed items")
    parser.add_argument("--no-summary", action="store_true", help="Exclude summaries")
    parser.add_argument(
        "--summary-max-length",
        type=int,
        default=280,
        help="Max summary length before truncation",
    )
    parser.add_argument(
        "--template",
        choices=("short", "full"),
        default="short",
        help="Output template style",
    )
    return parser


def main() -> int:
    args = build_arg_parser().parse_args()
    try:
        feed_url = validate_feed_url(args.url)
        output_path = validate_output_path(args.output) if args.output else None
        if args.limit < 0:
            raise ValueError("--limit must be >= 0")
        if args.summary_max_length < 0:
            raise ValueError("--summary-max-length must be >= 0")

        xml_bytes = fetch_xml(feed_url)
        feed_title, entries = parse_feed(xml_bytes)
        if args.limit:
            entries = entries[: args.limit]

        include_summary = (not args.no_summary) and args.template == "full"
        markdown = render_markdown(
            feed_title=feed_title,
            entries=entries,
            template=args.template,
            include_summary=include_summary,
            summary_max_len=args.summary_max_length,
        )

        if output_path:
            output_path.parent.mkdir(parents=True, exist_ok=True)
            output_path.write_text(markdown, encoding="utf-8")
        else:
            sys.stdout.write(markdown)
        return 0
    except Exception as exc:  # noqa: BLE001
        sys.stderr.write(f"error: {exc}\n")
        return 1


if __name__ == "__main__":
    raise SystemExit(main())
skills/seedream-image-gen/.clawhub/origin.json (new file, 7 lines)
@@ -0,0 +1,7 @@
{
  "version": 1,
  "registry": "https://clawhub.ai",
  "slug": "seedream-image-gen",
  "installedVersion": "1.0.0",
  "installedAt": 1776838264623
}
skills/seedream-image-gen/SKILL.md (new file, 142 lines)
@@ -0,0 +1,142 @@
---
name: seedream
description: Seedream image generation - the image-generation API of Volcengine's Ark LLM platform. Supports text-to-image, image-to-image, multi-image fusion, sequential image sets, and more.
homepage: https://www.volcengine.com/docs/82379/1541523
metadata:
  {
    "openclaw":
      {
        "emoji": "🎨",
        "requires": { "bins": ["python3"], "env": ["ARK_API_KEY"] },
        "primaryEnv": "ARK_API_KEY",
        "install":
          [
            {
              "id": "python-brew",
              "kind": "brew",
              "formula": "python",
              "bins": ["python3"],
              "label": "Install Python (brew)",
            },
          ],
      },
  }
---

# Seedream Image Generation

The Seedream image-generation API on Volcengine's Ark LLM platform.

## Features

- ✅ Text to Image
- ✅ Image to Image - single image input
- ✅ Multi-image to Image - multiple image inputs
- ✅ Sequential Image Generation (image sets)
- ✅ Web Search - 5.0 lite only

## Environment Setup

Set the `ARK_API_KEY` environment variable before use:

```bash
export ARK_API_KEY="your-api-key-here"
```

Or pass the key on the command line:
```bash
python3 {baseDir}/scripts/seedream.py --api-key "your-api-key" ...
```

## Quick Start

### Text to Image

```bash
python3 {baseDir}/scripts/seedream.py -p "a cute orange cat sitting on a windowsill"
```

### Image to Image

```bash
python3 {baseDir}/scripts/seedream.py -p "convert the image to a watercolor style" -i "https://example.com/input.png"
```

### Multi-image Fusion

```bash
python3 {baseDir}/scripts/seedream.py -p "swap the outfit in image 1 for the outfit in image 2" -i "img1.png" -i "img2.png"
```

### Sequential Image Sets

```bash
python3 {baseDir}/scripts/seedream.py -p "generate 4 coherent illustrations" --sequential --max-images 4
```

## Command-line Arguments

| Argument | Short | Description | Default |
|----------|-------|-------------|---------|
| --api-key | -k | API key | env var ARK_API_KEY |
| --prompt | -p | Prompt | (required) |
| --model | -m | Model (5.0-lite/4.5/4.0) | 5.0-lite |
| --image | -i | Input image URL (repeatable) | - |
| --size | -s | Image size | 2K |
| --output-format | -f | Output format (png/jpeg) | png |
| --watermark | -w | Add a watermark | False |
| --sequential | - | Enable sequential (image-set) mode | False |
| --max-images | - | Max number of images | 4 |
| --web-search | - | Enable web search | False |
| --output | -o | Output directory | ~/Downloads |
| --proxy | -x | Proxy address | - |

## Supported Models

| Model | Alias | Supported features |
|-------|-------|--------------------|
| Seedream 5.0 lite | 5.0-lite | text-to-image, image-to-image, image sets, web search, png output |
| Seedream 4.5 | 4.5 | text-to-image, image-to-image, image sets, jpeg output |
| Seedream 4.0 | 4.0 | text-to-image, image-to-image, image sets, jpeg output |

## Image Sizes

- Option 1: `2K`, `3K`, `4K` (5.0-lite/4.5); `1K`, `2K`, `4K` (4.0)
- Option 2: pixel dimensions, e.g. `2048x2048`, `2848x1600`

## Examples

### Generate a single image

```bash
# Text to image - 5.0 lite
python3 {baseDir}/scripts/seedream.py -p "a futuristic city skyline at night" -o ./images

# Image to image
python3 {baseDir}/scripts/seedream.py -p "convert to black and white" -i "input.png" -o ./images

# Use the 4.5 model
python3 {baseDir}/scripts/seedream.py -p "a cute Shiba Inu" -m 4.5 -o ./images
```

### Generate an image set

```bash
# Text to image set
python3 {baseDir}/scripts/seedream.py -p "generate 4 coherent landscapes, one per season" --sequential -o ./images

# Image set from a reference image
python3 {baseDir}/scripts/seedream.py -p "generate brand designs based on the logo" -i "logo.png" --sequential --max-images 6 -o ./images
```

### Web search

```bash
# Generate a live weather graphic
python3 {baseDir}/scripts/seedream.py -p "today's Beijing weather forecast, modern flat style" --web-search -o ./images
```

## Output

- Generated images are saved to the specified output directory (default: ~/Downloads)
- Filename format: `seedream_{timestamp}_{index}.{ext}`
skills/seedream-image-gen/_meta.json (new file, 6 lines)
@@ -0,0 +1,6 @@
{
  "ownerId": "kn79mbe1m3w4dfs4vywn8fhgq182ng3c",
  "slug": "seedream-image-gen",
  "version": "1.0.0",
  "publishedAt": 1773164590067
}
skills/social-content/SKILL.md (new file, 34 lines)
@@ -0,0 +1,34 @@
---
name: social-content
description: Generate high-quality content across multiple social media platforms including X/Twitter, LinkedIn, Instagram, Facebook, and TikTok. Use when creating social media posts, thread content, short-form video scripts, or platform-specific adaptations.
---

# Social Content Generator

## Overview
Generate platform-specific content for X/Twitter, LinkedIn, Instagram, Facebook, and TikTok.

## When to Use
- Creating social media posts
- Thread content for X/Twitter
- Platform-specific content adaptation
- Short-form video scripts

## Supported Platforms
- X/Twitter (280 chars, threads)
- LinkedIn (professional, 200-600 words)
- Instagram (caption + hashtags)
- Facebook (engagement-focused)
- TikTok (short video scripts)

## Workflow
1. Identify the target platform(s)
2. Apply platform-specific tone/style
3. Generate content with hashtags/call-to-action
4. Save to drafts as a platform-specific file

## Output Format
- Platform tag prefix
- Platform-specific length
- Relevant hashtags (3-5)
- Call-to-action (optional)
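The 280-character X/Twitter limit above is easy to enforce before saving a draft. One possible sketch, using a greedy word-boundary splitter (the splitting heuristic is an assumption, not part of this skill):

```python
def split_thread(text: str, limit: int = 280) -> list[str]:
    """Greedily pack words into chunks of at most `limit` characters."""
    chunks: list[str] = []
    current = ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= limit:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = word
    if current:
        chunks.append(current)
    return chunks

print(split_thread("aaaaaaaaaa bbbbbbbbbb", limit=12))  # ['aaaaaaaaaa', 'bbbbbbbbbb']
```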
skills/tavily-web-search-for-openclaw/.clawhub/origin.json (new file, 7 lines)
@@ -0,0 +1,7 @@
{
  "version": 1,
  "registry": "https://clawhub.ai",
  "slug": "tavily-web-search-for-openclaw",
  "installedVersion": "1.0.0",
  "installedAt": 1776838149303
}
185
skills/tavily-web-search-for-openclaw/README.md
Normal file
185
skills/tavily-web-search-for-openclaw/README.md
Normal file
@@ -0,0 +1,185 @@
# Tavily Web Search Skill for OpenClaw 🦀

A lightweight Tavily web search skill for OpenClaw that works without `pip` and without third-party Python packages.

This skill is designed for minimal Linux environments such as:

- Raspberry Pi
- Ubuntu Server
- small VPS setups
- systems where installing Python packages is unavailable, restricted, or intentionally avoided

Instead of using the `tavily-python` SDK, this skill calls the Tavily REST API directly using Python's standard library.

## Features

- Tavily web search through direct REST API calls
- No `pip install` required
- No external Python dependencies
- Works well on Raspberry Pi and Ubuntu Server
- Supports general search and news search
- Supports answer summaries, images, and domain filtering
- Easy to integrate into OpenClaw skills
- Simple secret-file based API key setup

## Why this version exists

The official Tavily Python SDK is convenient, but some environments do not have a practical or desirable `pip` workflow.

This skill exists for setups where you want:

- a small footprint
- no package installation step
- predictable deployment
- compatibility with minimal server environments
- a solution that keeps working even on systems where Python package installation is restricted

This is especially useful on Raspberry Pi, Ubuntu Server, and other minimal Linux systems where you may prefer to avoid virtual environments, extra package managers, or external Python dependencies for a simple search integration.

## Folder Structure

```text
skills/tavily/
├── SKILL.md
├── .secrets/
│   └── tavily.key
└── scripts/
    └── tavily_search.py
```

## Secret Setup

Create the secret directory:

```bash
mkdir -p skills/tavily/.secrets
chmod 700 skills/tavily/.secrets
```

Create the key file:

```bash
nano skills/tavily/.secrets/tavily.key
```

The file must contain only your raw Tavily API key:

```
tvly-xxxxxxxxxxxxxxxx
```

Do **not** write:

```
TAVILY_API_KEY=tvly-xxxxxxxxxxxxxxxx
```

Set permissions:

```bash
chmod 600 skills/tavily/.secrets/tavily.key
```
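The setup steps above can also be done non-interactively, which is handy when provisioning a headless server. This is a sketch; the key value is a placeholder you must replace with your real Tavily key:

```shell
# Non-interactive equivalent of the setup steps above.
# The key below is a placeholder; substitute your real Tavily key.
SKILL_DIR=skills/tavily
umask 077                              # new files default to owner-only
mkdir -p "$SKILL_DIR/.secrets"
printf '%s\n' 'tvly-xxxxxxxxxxxxxxxx' > "$SKILL_DIR/.secrets/tavily.key"
chmod 700 "$SKILL_DIR/.secrets"        # matches the permissions set above
chmod 600 "$SKILL_DIR/.secrets/tavily.key"
```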

## Usage

Basic search:

```bash
python3 skills/tavily/scripts/tavily_search.py --query "latest AI news"
```

News-focused search:

```bash
python3 skills/tavily/scripts/tavily_search.py --query "gold prices" --topic news
```

Advanced search:

```bash
python3 skills/tavily/scripts/tavily_search.py --query "raspberry pi ubuntu server optimization" --depth advanced
```

JSON output:

```bash
python3 skills/tavily/scripts/tavily_search.py --query "python asyncio" --json
```

## Supported Options

| Option              | Description                       |
| ------------------- | --------------------------------- |
| `--query`           | **Required.** Search query        |
| `--topic`           | `general` or `news`               |
| `--depth`           | `basic` or `advanced`             |
| `--max-results`     | Number of results (1-10)          |
| `--no-answer`       | Disable the answer summary        |
| `--raw-content`     | Include parsed raw content        |
| `--images`          | Include image results             |
| `--include-domains` | Restrict results to these domains |
| `--exclude-domains` | Filter out these domains          |
| `--json`            | Output the raw JSON response      |

## OpenClaw Integration

This skill is meant to be used from OpenClaw through `SKILL.md`.

Typical usage flow:

1. The user asks for a web search or recent information
2. OpenClaw invokes the Tavily skill
3. The skill runs `scripts/tavily_search.py`
4. The script reads the API key from `.secrets/tavily.key`
5. Results are returned in a format suitable for summarization

## Why no pip is required

This project intentionally avoids the Tavily Python SDK and other third-party dependencies.

That means:

- there is no `pip install` step
- there is no dependency on `tavily-python`
- there is no virtual environment requirement just to use the skill
- deployment stays simple on minimal systems

The script uses only Python's standard library to call the Tavily REST API directly.

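The standard-library call pattern described above can be sketched in miniature. This builds the request object only and makes no network call; the endpoint URL and payload fields mirror the script shipped with this skill, while `build_request` itself is an illustrative helper, not part of the skill:

```python
import json
import urllib.request

API_URL = "https://api.tavily.com/search"


def build_request(api_key: str, query: str) -> urllib.request.Request:
    """Build a POST request for the Tavily search endpoint using only stdlib."""
    payload = {"api_key": api_key, "query": query, "max_results": 5}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_request("tvly-xxxxxxxxxxxxxxxx", "latest AI news")
print(req.get_method())    # POST
print(req.get_full_url())  # https://api.tavily.com/search
```

Sending the request is then a single `urllib.request.urlopen(req)` call, which is exactly what the full script does, with timeout and error handling added.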
## Security Notes

- The `.secrets` directory should never be committed
- Your API key should stay only on the target machine
- This repository should contain code and documentation only
- Add `.secrets/` to `.gitignore`
- Keep `tavily.key` readable only by the user or service that runs the skill

Example `.gitignore` entries:

```
.secrets/
__pycache__/
*.pyc
```

## Requirements

- Python 3
- Network access to the Tavily API
- A valid Tavily API key
- No additional Python packages are required

## Motivation

This project is especially useful for:

- Raspberry Pi home server setups
- Ubuntu Server deployments
- minimal VPS environments
- offline-managed or tightly controlled systems
- users who want Tavily search without SDK installation
- environments where `pip` is unavailable, restricted, or intentionally avoided

## License

This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
25
skills/tavily-web-search-for-openclaw/SKILL.md
Normal file
@@ -0,0 +1,25 @@
---
name: tavily
description: Use this when the user asks to search the web, look up recent information, check current events, gather online sources, or research a topic using Tavily search.
---

# Tavily Search

Use this skill for web search and lightweight research through the Tavily Search API.

## Requirements

A valid Tavily API key must be available through one of these methods:

1. `--api-key`
2. `TAVILY_API_KEY`
3. `{baseDir}/.secrets/tavily.key`

If no key is available, explain that Tavily search is not configured in this environment.

## Command

Run:

```bash
python3 {baseDir}/scripts/tavily_search.py --query "<user query>"
```
6
skills/tavily-web-search-for-openclaw/_meta.json
Normal file
@@ -0,0 +1,6 @@
{
  "ownerId": "kn7dnrfjd81n2c2x98wy693sbs82nfgp",
  "slug": "tavily-web-search-for-openclaw",
  "version": "1.0.0",
  "publishedAt": 1773269610000
}
241
skills/tavily-web-search-for-openclaw/scripts/tavily_search.py
Normal file
@@ -0,0 +1,241 @@
#!/usr/bin/env python3

import argparse
import json
import os
import sys
import urllib.error
import urllib.request


API_URL = "https://api.tavily.com/search"


def load_api_key():
    """Read the API key from ../.secrets/tavily.key, if present."""
    base_dir = os.path.dirname(os.path.abspath(__file__))
    key_path = os.path.normpath(
        os.path.join(base_dir, "..", ".secrets", "tavily.key")
    )

    try:
        with open(key_path, "r", encoding="utf-8") as f:
            raw = f.read().strip()

        # Tolerate a "TAVILY_API_KEY=..." line, even though the README
        # asks for the bare key.
        if "=" in raw:
            left, right = raw.split("=", 1)
            if left.strip() == "TAVILY_API_KEY":
                return right.strip()

        return raw or None
    except FileNotFoundError:
        return None


def clamp_max_results(value: int) -> int:
    if value < 1:
        return 1
    if value > 10:
        return 10
    return value


def build_payload(args: argparse.Namespace, api_key: str) -> dict:
    payload = {
        "api_key": api_key,
        "query": args.query,
        "search_depth": args.depth,
        "topic": args.topic,
        "max_results": clamp_max_results(args.max_results),
        "include_answer": not args.no_answer,
        "include_raw_content": args.raw_content,
        "include_images": args.images,
    }

    if args.include_domains:
        payload["include_domains"] = args.include_domains

    if args.exclude_domains:
        payload["exclude_domains"] = args.exclude_domains

    return payload


def tavily_search(payload: dict, timeout: int = 30) -> dict:
    data = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=data,
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json",
        },
        method="POST",
    )

    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            body = resp.read().decode("utf-8", errors="replace")
            return json.loads(body)
    except urllib.error.HTTPError as exc:
        try:
            details = exc.read().decode("utf-8", errors="replace")
        except Exception:
            details = ""
        return {
            "success": False,
            "error": f"HTTP {exc.code}",
            "details": details,
        }
    except urllib.error.URLError as exc:
        return {
            "success": False,
            "error": "Network error",
            "details": str(exc.reason),
        }
    except Exception as exc:
        return {
            "success": False,
            "error": "Unexpected error",
            "details": str(exc),
        }


def print_human(result: dict) -> int:
    if not isinstance(result, dict):
        print("Error: invalid response format", file=sys.stderr)
        return 1

    if "error" in result and not result.get("results"):
        print(f"Error: {result.get('error')}", file=sys.stderr)
        if result.get("details"):
            print(result["details"], file=sys.stderr)
        return 1

    print(f"Query: {result.get('query', 'N/A')}")
    print(f"Response time: {result.get('response_time', 'N/A')}")
    usage = result.get("usage", {})
    if isinstance(usage, dict):
        print(f"Credits used: {usage.get('credits', 'N/A')}")
    print()

    answer = result.get("answer")
    if answer:
        print("=== ANSWER ===")
        print(answer)
        print()

    results = result.get("results", [])
    if results:
        print("=== RESULTS ===")
        for index, item in enumerate(results, start=1):
            title = item.get("title") or "No title"
            url = item.get("url") or "N/A"
            score = item.get("score")
            content = item.get("content") or ""

            print(f"\n{index}. {title}")
            print(f"   URL: {url}")

            if isinstance(score, (int, float)):
                print(f"   Score: {score:.3f}")

            if content:
                snippet = content[:280].replace("\n", " ").strip()
                if len(content) > 280:
                    snippet += "..."
                print(f"   {snippet}")

    images = result.get("images", [])
    if images:
        print(f"\n=== IMAGES ({len(images)}) ===")
        for image in images[:5]:
            if isinstance(image, dict):
                print(f"  {image.get('url', 'N/A')}")
            else:
                print(f"  {image}")

    return 0


def main() -> int:
    parser = argparse.ArgumentParser(
        description="Tavily Search via direct REST API call",
    )

    parser.add_argument("--query", required=True, help="Search query")
    parser.add_argument(
        "--api-key",
        help="Tavily API key. If omitted, TAVILY_API_KEY or the key file is used.",
    )
    parser.add_argument(
        "--depth",
        choices=["basic", "advanced"],
        default="basic",
        help="Search depth",
    )
    parser.add_argument(
        "--topic",
        choices=["general", "news"],
        default="general",
        help="Search topic",
    )
    parser.add_argument(
        "--max-results",
        type=int,
        default=5,
        help="Number of results to return (1-10)",
    )
    parser.add_argument(
        "--no-answer",
        action="store_true",
        help="Do not request Tavily answer summary",
    )
    parser.add_argument(
        "--raw-content",
        action="store_true",
        help="Include parsed raw content",
    )
    parser.add_argument(
        "--images",
        action="store_true",
        help="Include image results",
    )
    parser.add_argument(
        "--include-domains",
        nargs="+",
        help="Only include these domains",
    )
    parser.add_argument(
        "--exclude-domains",
        nargs="+",
        help="Exclude these domains",
    )
    parser.add_argument(
        "--json",
        action="store_true",
        help="Print raw JSON response",
    )

    args = parser.parse_args()

    # Resolve the key in the documented priority order:
    # --api-key flag, TAVILY_API_KEY environment variable, then the key file.
    api_key = args.api_key or os.environ.get("TAVILY_API_KEY") or load_api_key()
    if not api_key:
        print(
            "Error: Tavily API key not found (checked --api-key, "
            "TAVILY_API_KEY, and ../.secrets/tavily.key)",
            file=sys.stderr,
        )
        return 1

    payload = build_payload(args, api_key)
    result = tavily_search(payload)

    if args.json:
        print(json.dumps(result, indent=2, ensure_ascii=False))
        return 0 if "error" not in result else 1

    return print_human(result)


if __name__ == "__main__":
    raise SystemExit(main())