Skills are reusable agent capabilities that provide specialized workflows and domain knowledge. You can use Agent Skills to give your deep agent new capabilities and expertise. For ready-to-use skills that improve your agent's performance on LangChain ecosystem tasks, see the LangChain Skills repository.

Deep agent skills follow the Agent Skills specification and add one additional capability: interpreter skills, which make it possible to provide skills with importable functions that an interpreter can call.
A skills source is a directory of skill folders, where each folder has one or more files that contain context the agent can use:
A SKILL.md file containing instructions and metadata about the skill
Additional scripts (optional)
Additional reference info, such as docs (optional)
Additional assets, such as templates and other resources (optional)
Any additional assets (scripts, docs, templates, or other resources) must be referenced in the SKILL.md file, with a note on what each file contains and how to use it, so the agent can decide when to use them.
When you create a deep agent, you can pass in a list of directories containing skills. As the agent starts, it reads the frontmatter of each SKILL.md file. When the agent receives a prompt, it checks whether any skill can help fulfill that prompt. If it finds a matching skill, it then reviews the rest of that skill's files. This pattern of reviewing skill information only when needed is called progressive disclosure.
You might have a skills folder that contains a skill to use a docs site in a certain way, as well as another skill to search the arXiv preprint repository of research papers:
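One possible layout (the folder and file names here are illustrative, not prescribed):

```
skills/
├── docs-site/
│   ├── SKILL.md
│   └── reference/
│       └── site-structure.md
└── arxiv-search/
    ├── SKILL.md
    └── scripts/
        └── search.py
```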
The SKILL.md file always follows the same pattern, starting with metadata in the frontmatter and followed by the instructions for the skill. The following example shows a skill that gives instructions on how to provide relevant LangGraph docs when prompted:
````md
---
name: langgraph-docs
description: Use this skill for requests related to LangGraph in order to fetch relevant documentation to provide accurate, up-to-date guidance.
module: index.ts
---

# langgraph-docs

## Overview

This skill explains how to access LangGraph Python documentation to help answer questions and guide implementation.

## Instructions

### 1. Fetch the Documentation Index

Use the fetch_url tool to read the following URL:

https://docs.langchain.com/llms.txt

This provides a structured list of all available documentation with descriptions.

### 2. Select Relevant Documentation

Based on the question, identify 2-4 most relevant documentation URLs from the index. Prioritize:

- Specific how-to guides for implementation questions
- Core concept pages for understanding questions
- Tutorials for end-to-end examples
- Reference docs for API details

### 3. Fetch Selected Documentation

Use the fetch_url tool to read the selected documentation URLs.

### 4. Provide accurate guidance

After reading the documentation, answer the user's question using the relevant LangGraph docs you fetched. In your response:

- Give a direct answer first.
- Include the minimum necessary context and any key steps or API names.
- Avoid quoting long passages. Paraphrase and link instead.

### 5. Provide the regular links for the used references

At the end of your response, include a **References** section listing the page URLs you used.

`llms.txt` uses Markdown link targets that typically end in `.md`. Use the helper from this skill module to resolve those into the actual page URLs before listing them as references.

```typescript
const { resolveLlmsUrl } = await import("@/skills/langgraph-docs");

// llms.txt uses Markdown link targets that typically end in `.md`.
// Convert those into the actual page URLs before fetching.
const llmsUrls = [
  "https://docs.langchain.com/oss/langgraph/concepts.md",
  "https://docs.langchain.com/oss/langgraph/concepts.md",
  "https://docs.langchain.com/oss/langgraph/tutorials.md",
];
const pageUrls = [...new Set(llmsUrls.map(resolveLlmsUrl))];
pageUrls;
```
````
The referenced helper code would be placed in index.ts:
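A minimal sketch of such a helper (hypothetical implementation; this version assumes resolving a link target simply means stripping the `.md` suffix, while the real module may do more):

```typescript
// index.ts — hypothetical implementation of the skill module.
// resolveLlmsUrl maps an llms.txt link target (which typically ends
// in ".md") to the actual documentation page URL.
export function resolveLlmsUrl(url: string): string {
  return url.endsWith(".md") ? url.slice(0, -".md".length) : url;
}
```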
The following example shows a SKILL.md file using all available frontmatter fields:
````md
---
name: langgraph-docs
description: Use this skill for requests related to LangGraph in order to fetch relevant documentation to provide accurate, up-to-date guidance.
license: MIT
compatibility: Requires internet access for fetching documentation URLs
metadata:
  author: langchain
  version: "1.0"
allowed-tools: fetch_url
module: index.ts
---

# langgraph-docs

## Overview

This skill explains how to access LangGraph Python documentation to help answer questions and guide implementation.

## Instructions

### 1. Fetch the documentation index

Use the fetch_url tool to read the following URL:

https://docs.langchain.com/llms.txt

This provides a structured list of all available documentation with descriptions.

### 2. Select relevant documentation

Based on the question, identify 2-4 most relevant documentation URLs from the index. Prioritize:

- Specific how-to guides for implementation questions
- Core concept pages for understanding questions
- Tutorials for end-to-end examples
- Reference docs for API details

### 3. Fetch selected documentation

Use the fetch_url tool to read the selected documentation URLs.

### 4. Provide accurate guidance

After reading the documentation, answer the user's question using the relevant LangGraph docs you fetched. In your response:

- Give a direct answer first.
- Include the minimum necessary context and any key steps or API names.
- Avoid quoting long passages. Paraphrase and link instead.

### 5. Provide the regular links for the used references

At the end of your response, include a **References** section listing the page URLs you used.

`llms.txt` uses Markdown link targets that typically end in `.md`. Use the helper from this skill module to resolve those into the actual page URLs before listing them as references.

```typescript
const { resolveLlmsUrl } = await import("@/skills/langgraph-docs");

// llms.txt uses Markdown link targets that typically end in `.md`.
// Convert those into the actual page URLs before fetching.
const llmsUrls = [
  "https://docs.langchain.com/oss/langgraph/concepts.md",
  "https://docs.langchain.com/oss/langgraph/concepts.md",
  "https://docs.langchain.com/oss/langgraph/tutorials.md",
];
const pageUrls = [...new Set(llmsUrls.map(resolveLlmsUrl))];
pageUrls;
```
````
List of skill source paths. Paths must be specified using forward slashes and are relative to the backend's root.
If omitted, no skills are loaded.
When using StateBackend (default), provide skill files with invoke(files={...}). Use create_file_data() from deepagents.backends.utils to format file contents; raw strings are not supported.
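A sketch of that flow, assuming the `create_deep_agent` and `create_file_data` APIs shown elsewhere on this page (the skill content, path, and prompt are illustrative):

```python
from deepagents import create_deep_agent
from deepagents.backends.utils import create_file_data

SKILL_MD = """---
name: hello-skill
description: Example skill used to illustrate passing files via invoke.
---
# hello-skill
Greet the user politely.
"""

agent = create_deep_agent(skills=["/skills/"])

# With the default StateBackend, skill files travel in the invocation state.
# Raw strings are not accepted; wrap contents with create_file_data().
result = agent.invoke({
    "messages": [{"role": "user", "content": "Say hello."}],
    "files": {"/skills/hello-skill/SKILL.md": create_file_data(SKILL_MD)},
})
```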
With FilesystemBackend, skills are loaded from disk relative to the backend’s root_dir.
Later sources override earlier ones for skills with the same name (last one wins).
The SDK only loads the sources you pass in skills. It does not automatically scan CLI directories such as ~/.deepagents/... or ~/.agents/.... For CLI storage conventions, see App data.
Emulating CLI source order in SDK
If you want CLI-style layering in SDK code, pass all desired sources explicitly in lowest-to-highest precedence order:
When multiple skill sources contain a skill with the same name, the skill from the source listed later in the skills array takes precedence (last one wins). This lets you layer skills from different origins.
```python
# If both sources contain a skill named "web-search",
# the one from "/skills/project/" wins (loaded last).
agent = create_deep_agent(
    model="google_genai:gemini-3.1-pro-preview",
    skills=["/skills/user/", "/skills/project/"],
    ...
)
```
When you use subagents, you can configure which skills each type has access to:
General-purpose subagent: Automatically inherits skills from the main agent when you pass skills to create_deep_agent. No additional configuration is needed.
Custom subagents: Do not inherit the main agent’s skills. Add a skills parameter to each subagent definition with that subagent’s skill source paths.
Skill state is fully isolated: the main agent’s skills are not visible to subagents, and subagent skills are not visible to the main agent.
```python
from deepagents import create_deep_agent

research_subagent = {
    "name": "researcher",
    "description": "Research assistant with specialized skills",
    "system_prompt": "You are a researcher.",
    "tools": [web_search],
    "skills": ["/skills/research/", "/skills/web-search/"],  # Subagent-specific skills
}

agent = create_deep_agent(
    model="google_genai:gemini-3.1-pro-preview",
    skills=["/skills/main/"],  # Main agent and GP subagent get these
    subagents=[research_subagent],  # Researcher gets only its own skills
)
```
For more information on subagent configuration and skills inheritance, see Subagents.
When skills are configured, a “Skills System” section is injected into the agent’s system prompt. The agent uses this information to follow a three-step process:
Match—When a user prompt arrives, the agent checks whether any skill’s description matches the task.
Read—If a skill applies, the agent reads the full SKILL.md file using the path shown in its skills list.
Execute—The agent follows the skill’s instructions and accesses any supporting files (scripts, templates, reference docs) as needed.
Write clear, specific descriptions in your SKILL.md frontmatter. The agent decides whether to use a skill based on the description alone—detailed descriptions lead to better skill matching.
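For example (illustrative frontmatter, not from a real skill), compare a vague description with a specific one:

```yaml
# Vague — gives the agent little to match on:
description: Helps with docs.

# Specific — states when to use the skill and what it does:
description: Use this skill for requests related to LangGraph in order to
  fetch relevant documentation to provide accurate, up-to-date guidance.
```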
Interpreter skills are skills that expose code modules to an interpreter. Regular skills give the agent instructions and context. Interpreter skills also give the agent importable functions it can call from interpreter code.

This lets you package domain-specific logic once and make it available as a deterministic building block inside the agent's workspace. Instead of asking the model to re-create a parser, scorer, normalizer, validator, or aggregation routine from scratch, the agent can import a tested helper and compose it with tools, subagents, and runtime state.

Use interpreter skills for code that should be:
Reusable across prompts, agents, or projects.
Deterministic enough that you want the same behavior every time.
Too detailed to keep in the model context as instructions.
Useful inside larger workflows, such as scoring search results, normalizing API responses, validating records, grouping rows, or converting data into a report-ready shape.
To make a skill importable:
1. Add a module entry: add a `module` key to the skill's SKILL.md frontmatter. The value is a JavaScript or TypeScript file path relative to the skill directory.
2. Configure skills normally: pass the skill source path with the `skills` argument when creating the agent.
3. Use the same backend: configure the interpreter middleware with the same backend that SkillsMiddleware uses to load skill files.
4. Import from interpreter code: the agent imports the helper module with `await import("@/skills/<name>")`.
````md
---
name: order-helpers
description: Helper functions for normalizing and grouping order records.
module: index.ts
---

# order-helpers

Use this skill when order records need deterministic cleanup or aggregation.

Import these utilities into the REPL in order to interact with order data:

```typescript
const { groupByStatus } = await import("@/skills/order-helpers");
groupByStatus(...);
```
````
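The `index.ts` referenced in that frontmatter could be sketched as follows (hypothetical implementation; the real helper's shape is up to the skill author):

```typescript
// index.ts — hypothetical module for the order-helpers skill.
// groupByStatus buckets order records by their `status` field,
// giving the agent a deterministic aggregation primitive.
type Order = { id: string; status: string };

export function groupByStatus(orders: Order[]): Record<string, Order[]> {
  const groups: Record<string, Order[]> = {};
  for (const order of orders) {
    (groups[order.status] ??= []).push(order);
  }
  return groups;
}
```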
Skills can include scripts alongside the SKILL.md file, such as a Python file that performs a search or data transformation. The agent can read these scripts from any backend, but to execute them, the agent needs access to a shell, which only sandbox backends provide.

When you use a CompositeBackend that routes skills to a StoreBackend for persistence while using a sandbox as the default backend, skill files live in the store rather than in the sandbox where code runs. For sandboxes to be able to use the scripts, you must use custom middleware to upload skill scripts into the sandbox before the agent starts:
```python
import asyncio
from pathlib import Path
from typing import Any

from daytona import Daytona
from deepagents import create_deep_agent
from deepagents.backends import CompositeBackend, StoreBackend
from deepagents.backends.utils import create_file_data
from langchain.agents.middleware import AgentMiddleware, AgentState
from langchain_daytona import DaytonaSandbox
from langgraph.runtime import Runtime
from langgraph.store.memory import InMemoryStore

# Identical skill bundles for every user: one shared store namespace.
SKILLS_SHARED_NAMESPACE = ("skills", "builtin")


class SkillSandboxSyncMiddleware(AgentMiddleware[AgentState, Any, Any]):
    """Copy shared skill files from the store into the sandbox before each agent run."""

    def __init__(self, backend: CompositeBackend) -> None:
        super().__init__()
        self.backend = backend

    async def abefore_agent(self, state: AgentState, runtime: Runtime[Any]) -> None:
        store = runtime.store
        files: list[tuple[str, bytes]] = []
        for item in await store.asearch(SKILLS_SHARED_NAMESPACE):
            key = str(item.key)
            if ".." in key or any(c in key for c in ("*", "?")):
                msg = f"Invalid key: {key}"
                raise ValueError(msg)
            normalized = key if key.startswith("/") else f"/{key}"
            # CompositeBackend routes paths and batches uploads to the right backend.
            files.append((f"/skills{normalized}", item.value["content"].encode()))
        if files:
            await self.backend.aupload_files(files)


async def seed_skill_store(store: InMemoryStore) -> None:
    """Load canonical skill files from disk into the shared store namespace (run once at deploy).

    You can retrieve skills from any source (local filesystem, remote URL, etc.).
    """
    skills_dir = Path(__file__).resolve().parent / "skills"
    for file_path in sorted(p for p in skills_dir.rglob("*") if p.is_file()):
        rel = file_path.relative_to(skills_dir).as_posix()
        key = f"/{rel}"
        await store.aput(
            SKILLS_SHARED_NAMESPACE,
            key,
            create_file_data(file_path.read_text(encoding="utf-8")),
        )


async def main() -> None:
    store = InMemoryStore()
    await seed_skill_store(store)

    daytona = Daytona()
    sandbox = daytona.create()
    sandbox_backend = DaytonaSandbox(sandbox=sandbox)
    backend = CompositeBackend(
        default=sandbox_backend,
        routes={
            "/skills/": StoreBackend(
                store=store,
                namespace=lambda _rt: SKILLS_SHARED_NAMESPACE,
            ),
        },
    )
    try:
        agent = create_deep_agent(
            model="google_genai:gemini-3.1-pro-preview",
            backend=backend,
            skills=["/skills/"],
            store=store,
            middleware=[SkillSandboxSyncMiddleware(backend)],
        )
    finally:
        sandbox.stop()


if __name__ == "__main__":
    asyncio.run(main())
```
The middleware's before_agent hook runs before each agent invocation, reading skill files from the shared namespace and uploading them into the sandbox filesystem. Once synced, the agent can execute scripts with the execute tool just like any other file in the sandbox. For a more complete example that also syncs memories bidirectionally, see syncing skills and memories with custom middleware.