Agent Skills as OCI Artifacts
I'm proposing a specification for packaging and distributing Agent Skills as OCI Artifacts, enabling Skill discovery, full portability, provenance tracking, integrity guarantees, and enterprise-grade features.
Anthropic published the Agent Skills open specification in December 2025, and the community response has been remarkable. Platform support followed quickly, shared Skills repositories started popping up on GitHub, and vendors began integrating Skills management into their offerings. That's what open specifications do. They unlock ecosystems.
Agent Skills are folders of instructions, scripts, and resources that agents can discover and use to do things more accurately and efficiently.
There's one thing the Specification doesn't cover, though: how do you actually distribute and manage Skills beyond your own project? That's where things get complicated.
The Missing Piece in the Skills Specification
Adding a Skill to My Project
Git seems like the natural choice for distributing Skills. They're just files in a well-defined folder structure, and there are already plenty of GitHub repositories collecting them. In theory, I can check out a repository and copy the Skill I need into my project. Easy, right? Hold on.
- I would like to add a Skill to my project to instruct the agent on how to define good unit tests with JUnit. Before writing my own, let me check if someone has already defined a good one for the same purpose. How do I discover Skills? The Specification doesn't say anything about discovery, and every platform has come up with its own custom registry, none of them interoperable.
- I found a few candidates across different GitHub repositories. Now I need to clone each repo just to browse what's inside. Can't I browse available Skills without pulling down entire repositories first? Git does support sparse and shallow checkouts, but most tools today still require a full clone, which can be very resource-intensive for large repositories. To reduce this friction, several CLIs have emerged that let you install a specific Skill directly from a source Git repository, but they each expect your repository to follow their own conventions.
- I picked one of those CLIs and used it to install the Skill. But its conventions conflict with what another tool expects, and neither approach is part of the Specification. Why isn't there a standard way to structure and consume a Skills repository? For each registry or catalog I want my project to be compatible with, I have to figure out its specific requirements and onboard separately. That doesn't scale.
- The Skill is now under `.agents/skills`. But my company doesn't use GitHub. We run an internal Forgejo instance. Do I need to mirror public repositories internally and build custom tooling just to fetch a Skill? Setting up and maintaining an internal mirror, keeping it in sync, and potentially developing custom integrations is a significant investment that most teams can't afford.
- Now that I have the Skill in my project, how can I trace it back to its source? How do I check if there's an update? What if a vulnerability is found in one of its scripts? Can I even prove the integrity of the files I copied? The Specification doesn't cover provenance tracking at all. Some tools introduced manifest and lock files to record the source repository and commit digest, but there's no standard, tools like Renovate or Dependabot don't support these vendor-specific formats, and a commit digest is far from the non-falsifiable provenance attestation we'd expect from a mature software supply chain.
Sharing a Skill With Others
One of the main goals of Skills is to "capture organisational knowledge in portable, version-controlled packages". Publishing a Skill to a GitHub repository sounds like a straightforward way to share it. But the moment someone else tries to consume it, or a vendor tries to add enterprise-grade guarantees around it, the cracks start to show.
- I've written a useful Skill and published it on GitHub. How do others discover it? Do I need to register it separately with every Skills catalog or registry out there? Each platform collects and indexes Skills differently, so reaching a broad audience means repeating the onboarding process for every catalog. That doesn't scale.
- A colleague wants to use my Skill but their tool expects a different repository structure than mine. There's no standard, so every platform does it differently. My Skill isn't truly portable, and consumers are locked into whichever toolchain they happened to adopt first.
- An enterprise customer wants to consume my Skill from their internal Git server, not GitHub. Existing tooling barely supports anything beyond GitHub, meaning companies either force employees to create GitHub accounts to work around API rate limiting, or invest in mirroring infrastructure and custom tooling, neither of which is a reasonable expectation.
- How can my consumers trust that the Skill they installed is actually mine and hasn't been tampered with? Git keeps an audit trail, but that doesn't guarantee authenticity or integrity. I could require signed commits, but that still doesn't give consumers a portable, auditable way to verify provenance once the files have been copied into their project.
- A vendor wants to ship a hardened, security-scanned, SBOM-equipped version of my Skill, just like they do for container images. But without a standard packaging format, there's nothing to attach that metadata to, no standard way to track CVEs, and no concept of a releasable artifact. The only option is to copy Skill folders into a vendor-curated repository and invent yet another proprietary format, which fragments the ecosystem further and forces vendors to solve the packaging problem themselves before they can even begin adding enterprise value.
All of these problems share a common root: Skills have no standard packaging and distribution format. Git was never designed to be a package registry. The good news is that we already have a battle-tested, widely adopted standard for packaging and distributing software artifacts: OCI (Open Container Initiative), under the Linux Foundation.
What If Skills Were OCI Artifacts?
OCI registries are not just for container images. The ecosystem figured this out a while ago: Helm charts for Kubernetes, WebAssembly components, FluxCD configuration manifests, Open Policy Agent bundles, and many other artifact types are already packaged and distributed as OCI artifacts today. The infrastructure is there, the tooling is mature, and crucially, you don't need a container runtime to use any of it. Tools like ORAS (OCI Registry As Storage) exist precisely to push and pull arbitrary OCI artifacts without Docker or any other container runtime in the picture. OCI is just a standard for content-addressable, versioned, signed artifact storage and distribution. And that's exactly what Skills need.
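Content-addressability is the property doing the heavy lifting here. An OCI digest is simply the SHA-256 hash of an artifact's bytes, prefixed with the algorithm name, so a publisher and a consumer can independently verify that they hold identical content. A minimal sketch in Python (the helper name and the sample content are mine, for illustration only):

```python
import hashlib

def oci_digest(content: bytes) -> str:
    """Compute an OCI-style content digest in 'algorithm:hex' form,
    following the digest format used throughout the OCI specs."""
    return "sha256:" + hashlib.sha256(content).hexdigest()

# The same bytes always hash to the same digest, so a consumer can
# check that a pulled Skill matches exactly what the publisher signed.
skill_md = b"---\nname: junit-best-practices\n---\nWrite focused unit tests."
print(oci_digest(skill_md))
```

This is what makes a digest-pinned reference immutable in a way a Git tag or branch never is: changing a single byte of the artifact changes its identity.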
The Proposal
I'm proposing a specification for packaging and distributing Agent Skills as OCI Artifacts. Each Skill is packaged as an OCI artifact and published to any standard OCI registry, whether that's GitHub Container Registry, Docker Hub, or your company's internal Harbor or Zot instance. And because Skills are just files (instructions, scripts, resources) this solution is completely independent from the language or tech stack of the project consuming them. A Java project, a Python service, a Go CLI, a polyglot monorepo. It doesn't matter. OCI is the packaging layer, and it works everywhere. Here's what that changes in practice:
- Discovery becomes a first-class concern. Skills can be grouped into Collections, OCI artifacts that reference a set of individual Skill artifacts. Publish a Collection, and consumers can browse and install Skills from it by name, without cloning anything or knowing the full OCI reference of each Skill.
- Installation works like any other package manager. Running `arconia skills add --ref ghcr.io/thomasvitale/agent-skills/manage-pull-requests` (an example of a CLI implementing this Specification) adds the Skill to a declarative `skills.json` and a `skills.lock.json`, the same model as `package.json` and `package-lock.json` in npm. Anyone cloning your project can run `arconia skills install` to reproduce the exact same set of Skills, resolved to their exact digest. The agent itself could do that at the beginning of the session.
- Any OCI registry works, including the one you already have. GitHub, GitLab, Zot, Harbor, an on-premises registry: if it speaks OCI, it works. No GitHub account required, no mirroring infrastructure, no custom tooling per platform. And if your company has already invested in an enterprise OCI registry, there's nothing new to set up. You can start consuming public Skills from upstream registries immediately, and proxy or mirror them internally using the same registry workflows your team already uses for container images.
- No vendor lock-in, for anyone. There is one API to implement: the OCI distribution spec. Any compliant registry works out of the box, which means tooling authors don't need to build separate integrations for GitHub, GitLab, Bitbucket, or Forgejo. Vendors building Skills management features don't need to support specific platforms. And teams adopting this specification are free to switch registries without changing their workflows.
- Provenance and integrity are solved by the ecosystem, not by us. Because Skills are OCI artifacts, you can sign them with Sigstore's Cosign, attach SLSA provenance attestations, embed SBOMs, and include vulnerability scan results. All using tools that already exist and are already trusted. Consumers can verify signatures before installing. Existing security scanners can inspect the artifacts. CVEs can be tracked against specific artifact digests.
- Vendors can add enterprise value. A vendor can publish a hardened, evaluated, signed version of a Skill as a proper OCI artifact: with attestations, quality assurance reports, and compatibility metadata attached. No need to invent a proprietary format or copy folders between repositories. Any OCI-compliant tool can read and verify the metadata.
A Closer Look at the Specification
The specification defines two core artifact types, both built on standard OCI primitives.
A Skill artifact is an OCI Image Manifest with a dedicated `artifactType` of `application/vnd.agent-skills.skill.v1` and a `config.mediaType` of `application/vnd.agent-skills.skill.config.v1+json`. The Skill folder contents (the `SKILL.md`, scripts, and any additional resources) follow the existing Agent Skills specification and are packaged as the artifact's content. This means every Skill gets a content-addressable digest, can be tagged with a semantic version, and can be pushed to and pulled from any OCI-compliant registry.
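To make that concrete, here is a sketch of what a Skill artifact's manifest might look like. The two media types come from the proposal above; the digests, sizes, layer media type, and annotation are illustrative placeholders, not values mandated by the spec:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "artifactType": "application/vnd.agent-skills.skill.v1",
  "config": {
    "mediaType": "application/vnd.agent-skills.skill.config.v1+json",
    "digest": "sha256:7e4b...",
    "size": 233
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:f0a1...",
      "size": 4821,
      "annotations": {
        "org.opencontainers.image.title": "junit-best-practices"
      }
    }
  ]
}
```

Everything in this document is standard OCI structure, which is exactly why existing registries and tooling can handle it without modification.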
A Collection is an OCI Image Index, the same construct used for multi-platform container images, that references a set of individual Skill artifacts by their digest. Publishing a Collection is how you group and share a curated set of Skills under a single, discoverable reference. Consumers can browse a Collection and install Skills from it by name, without needing to know the full OCI reference of each individual Skill.
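A Collection could then look like the following sketch of an OCI Image Index. The index structure is standard OCI; the digests, sizes, and the use of the `org.opencontainers.image.title` annotation to carry the Skill name are my illustrative assumptions:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "manifests": [
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "artifactType": "application/vnd.agent-skills.skill.v1",
      "digest": "sha256:a3c9...",
      "size": 742,
      "annotations": {
        "org.opencontainers.image.title": "junit-best-practices"
      }
    },
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "artifactType": "application/vnd.agent-skills.skill.v1",
      "digest": "sha256:b812...",
      "size": 698,
      "annotations": {
        "org.opencontainers.image.title": "manage-pull-requests"
      }
    }
  ]
}
```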
On the consumer side, the specification defines two manifest files that work together much like `package.json` and `package-lock.json` in npm. The `skills.json` is a declarative manifest where you list the Skills your project depends on, either by full OCI reference or by short name if a Collection is registered. The `skills.lock.json` records the resolved digest of each Skill at install time, ensuring that anyone working on the same project pulls not just the same tag, which could be mutated, but the exact same immutable, content-addressed artifact. Running `arconia skills install` from a clean checkout always restores the exact set of Skills your project declared.
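The relationship between the two files boils down to a resolve step: take each declared reference, ask the registry which digest its tag currently points to, and pin it. A sketch in Python, where the schema and the stubbed resolver are illustrative assumptions rather than the spec's actual format:

```python
from typing import Callable

def lock_skills(manifest: dict, resolve_digest: Callable[[str], str]) -> dict:
    """Produce a lock structure by pinning each declared Skill reference
    to the immutable digest its tag resolves to at install time."""
    return {
        "skills": [
            {"ref": ref, "digest": resolve_digest(ref)}
            for ref in manifest["skills"]
        ]
    }

# Stub standing in for a real OCI registry lookup (e.g. via an ORAS client).
def fake_resolver(ref: str) -> str:
    return {"ghcr.io/example/skills/junit:1.2.0": "sha256:9d4e..."}[ref]

manifest = {"skills": ["ghcr.io/example/skills/junit:1.2.0"]}
print(lock_skills(manifest, fake_resolver))
```

Re-running the resolve step against the lock file instead of the registry is what makes installs reproducible: the tag may move, but the pinned digest never does.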
Finally, because Skills are standard OCI artifacts, any tool that speaks OCI can attach metadata to them via the Referrers API. This goes well beyond supply chain security. Yes, you can attach Cosign signatures, SLSA provenance attestations, SBOMs, and vulnerability scan results, and any OCI-compliant scanner will pick them up automatically. But the same mechanism opens the door to a richer ecosystem: vendors and the community can attach evaluation reports that attest to how a Skill performs when tested with specific agent frameworks, models, or task types. Think compatibility matrices, benchmark results, or quality scores, all stored as first-class OCI artifacts linked to the specific Skill digest they describe, queryable by any standard tool.
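Mechanically, a referrer is just another OCI manifest whose `subject` field points at the digest of the Skill it describes. A sketch of what an attached evaluation report might look like, where the `artifactType`, digests, and sizes are hypothetical placeholders rather than values defined by this proposal:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "artifactType": "application/vnd.example.skill-eval-report.v1",
  "subject": {
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "digest": "sha256:a3c9...",
    "size": 742
  },
  "config": {
    "mediaType": "application/vnd.oci.empty.v1+json",
    "digest": "sha256:4413...",
    "size": 2
  },
  "layers": [
    {
      "mediaType": "application/json",
      "digest": "sha256:be71...",
      "size": 1318
    }
  ]
}
```

Because the link lives in the `subject` field, a registry's Referrers API can enumerate every report, signature, and attestation attached to a given Skill digest without the Skill artifact itself ever changing.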
Next Steps
The full specification is available on GitHub, including the complete artifact format, manifest schema, and Collection structure. If you've run into the same challenges managing Skills, or if you think OCI is the right foundation to solve them, I'd love to hear from you. Feedback and contributions are welcome directly on the project.
I've also submitted this as a formal proposal for inclusion in the core Agent Skills Specification. If you'd like to see this become a standard, that's the place to make your voice heard.
To see what consuming and publishing Skills looks like in practice, I've implemented support for this specification in the Arconia CLI as a reference implementation, based on the ORAS Java Client. The repository includes a working example of a Skills Collection published according to the spec, and instructions for trying it out.
For a different implementation, check out the skills-oci CLI built on the ORAS Go Client by my friend Mauricio Salatino, whom I thank for the feedback and input that helped shape this Specification and article.
If you prefer a lower-level approach, the same workflows are available using the ORAS CLI directly.
The specification is open. Any tool can implement it.
Cover picture from Pexels.