Picture this: one AI giant quietly adopts a rival's framework to supercharge its own tools, and it could change how we interact with intelligent machines – but is it innovation or just clever borrowing?
Hey there, fellow tech enthusiasts! I'm Matthias, co-founder and publisher of THE DECODER, where we dive deep into how artificial intelligence is reshaping the relationship between humans and computers. Today, let's unpack a discovery that has the AI community buzzing: OpenAI appears to have quietly adopted a modular skills framework pioneered by rival Anthropic to enhance the capabilities of its AI agents. So is this a smart strategic move, or a thinly veiled case of copying ideas? Stick around – it might just change how you view AI development.
Back in October 2025, Anthropic unveiled their innovative skills system for their Claude assistant. This framework allows the AI to automatically select and apply specialized prompts tailored for specific tasks, making it easier for the model to tackle complex, real-world challenges without getting overwhelmed. Think of it as giving your AI a toolbox of mini-experts – each one honed for a particular job, like handling spreadsheets or analyzing PDFs. For beginners, imagine you're asking an AI to summarize a financial report: instead of the whole system fumbling through everything, the skills framework lets it pull in a dedicated 'finance expert' module to extract and process the data efficiently. It's like breaking down a big puzzle into smaller, manageable pieces that fit together perfectly.
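To make this concrete, here is a minimal sketch of what such a skill file might look like, loosely modeled on the SKILL.md format Anthropic has described, with a short metadata header followed by plain-Markdown instructions. The skill name, field names, and instructions below are purely illustrative and not taken from either company's actual files.

```markdown
---
name: finance-reports
description: Extract key figures and summarize tabular data from financial reports.
---

# Finance report skill

1. Identify the reporting period and currency before quoting any numbers.
2. Pull the key figures (revenue, net income, margins) into a short table.
3. Flag any values that could not be read with confidence instead of guessing.
```

Because the whole skill is just a text file, the model only needs to read it when the task at hand matches its description, which keeps the main prompt small.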
Now, fast-forward to a recent find shared by sharp-eyed user Elias Judin on X (formerly Twitter). While tinkering with OpenAI's offerings, Judin stumbled upon evidence that OpenAI is incorporating a similar setup into its Codex CLI tool and ChatGPT. He spotted folders with labels like 'pdfs' and 'spreadsheets,' each housing 'skill.md' files. These Markdown-based files aren't just random notes – they're detailed guides that instruct the AI on how to handle documents and data with precision. For example, if you're dealing with a PDF full of text, the skill might guide the AI to extract information step by step, much like calling on a specialized sub-prompt to solve a tricky subtask within the larger goal. And because it's all organized as simple folders containing Markdown files and perhaps some scripts, it's remarkably adaptable: you could add new skills or refine existing ones without any major overhaul.
This discovery points to OpenAI organizing their AI tools into app-like modules, each crafted for targeted tasks. Judin, who encountered this while experimenting with a '5.2 pro' model, has even documented his findings on GitHub, providing a transparent peek into the inner workings. It's reminiscent of how apps on your phone are built for specific functions – one for photos, another for messaging – but here, it's all about empowering AI agents to perform smarter and more efficiently.
So, what do you think? Is OpenAI's adoption of Anthropic's framework a savvy way to keep pace in the AI race, or does it raise ethical questions about intellectual property in tech? Could this modular approach lead to even more capable AI agents, or might it stifle original innovation by encouraging copycat strategies? Share your thoughts in the comments: is borrowing ideas just part of healthy competition, or is there a line that shouldn't be crossed? Let's discuss!