Operating Model

How We Work

Let's Open is built around a simple idea: AI can accelerate the work, but humans should still own the judgment.

Our operating model

We use AI agents to monitor the open source AI ecosystem, assemble research, draft structured content, maintain evergreen pages, and support publishing operations. Humans remain responsible for prioritization, editorial direction, and final release decisions.

What AI does here

  • scan projects, releases, and ecosystem shifts
  • draft structured explainers, comparisons, and guides
  • refresh pages as tools and categories evolve
  • prepare machine-readable discovery layers for agents and retrieval systems
  • maintain content operations with speed and consistency

What humans still decide

  • what deserves attention
  • what claims are strong enough to publish
  • what recommendations are defensible
  • how the site should evolve as a product and brand

Our editorial checks

Open source AI coverage can become misleading when it treats every GitHub repository, open-weight model, hosted API, and source-available tool as the same thing. We separate those categories before making recommendations.

  • Source posture: is the code open source, source-available, open-weight, open-adjacent, or closed but important for comparison?
  • Builder usefulness: can a serious builder actually run, inspect, integrate, or replace the component?
  • Operational risk: what licenses, hosted dependencies, data policies, credentials, or evaluation gaps matter before adoption?
  • Freshness: does the page need current release, pricing, or ecosystem verification before it should guide a decision?
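The four checks above can be thought of as a gate a page must pass before it recommends anything. A minimal sketch in Python, where the class name, fields, and pass/fail logic are illustrative assumptions rather than our actual tooling:

```python
from dataclasses import dataclass
from enum import Enum


class SourcePosture(Enum):
    # The categories we separate before making recommendations.
    OPEN_SOURCE = "open-source"
    SOURCE_AVAILABLE = "source-available"
    OPEN_WEIGHT = "open-weight"
    OPEN_ADJACENT = "open-adjacent"
    CLOSED = "closed"


@dataclass
class EditorialCheck:
    source_posture: SourcePosture
    builder_useful: bool       # can a builder run, inspect, integrate, or replace it?
    open_risks: list[str]      # unresolved license, hosted-dependency, or data-policy issues
    verified_fresh: bool       # release, pricing, and ecosystem info recently verified

    def ready_to_recommend(self) -> bool:
        # A page should only guide a decision once every check passes.
        return self.builder_useful and not self.open_risks and self.verified_fresh


check = EditorialCheck(
    source_posture=SourcePosture.OPEN_WEIGHT,
    builder_useful=True,
    open_risks=["license terms unverified"],
    verified_fresh=False,
)
print(check.ready_to_recommend())  # → False: an unresolved risk blocks publication
```

The point of the sketch is the ordering: posture classification comes first, and any single open risk or stale page blocks a recommendation.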

How pages get updated

We treat important guides, comparisons, and hubs as living documents. When a page changes materially, it should surface an updated date across human-readable pages, JSON indexes, search results, and agent text mirrors, so freshness is visible to both readers and machine consumers.

Tool recommendations are not legal advice or exhaustive benchmarks. They are editorial judgments for builders. For operational decisions, readers and agents should recheck licenses, releases, pricing, and security posture at the source.

Why this model matters

AI publishing can collapse into content spam. We are aiming for the opposite: use automation to create more signal, not more sludge. The point is not infinite content volume. The point is faster, better coverage of what matters.

Open by design

The site is intentionally easy for both humans and machines to consume. In addition to human-readable pages, we expose llms.txt, llms-full.txt, a machine-readable content index, an agent manifest, a lightweight search index, RSS, and a sitemap.
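For readers unfamiliar with the format, an llms.txt file is a markdown index that gives AI agents a compact map of the site. A minimal sketch of what ours might look like; the title, summary, and link entries here are illustrative placeholders, not the live file:

```
# Let's Open

> Coverage of the open source AI ecosystem for builders: explainers,
> comparisons, and living guides with visible freshness dates.

## Guides
- [Example guide](/guides/example-guide.md): illustrative entry

## Comparisons
- [Example comparison](/compare/example-comparison.md): illustrative entry
```

llms-full.txt follows the same idea but inlines full page text rather than links.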

Machine-readable endpoints include routing metadata such as audience, builder stage, stack layers, use cases, openness signals, freshness dates, related-content edges, example queries, and next actions. The goal is for AI agents to retrieve the right page, understand its role, and know what to verify before answering a builder.
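One entry in such an index might look like the JSON below. The field names and values are illustrative assumptions, not the site's actual schema; they simply map the routing metadata listed above onto keys an agent could filter on:

```json
{
  "url": "/guides/example-guide",
  "audience": ["builders"],
  "builder_stage": "evaluate",
  "stack_layers": ["models", "inference"],
  "use_cases": ["local-inference"],
  "openness": "open-weight",
  "updated": "2025-01-01",
  "related": ["/compare/example-comparison"],
  "example_queries": ["which open-weight models run locally?"],
  "next_actions": ["verify license and latest release at the source"]
}
```

An agent that retrieves this entry can see at a glance who the page is for, where it sits in the stack, how fresh it is, and what it should verify before answering.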

The standard

We would rather publish fewer, stronger pieces than flood the site with low-grade AI text. Open source AI deserves better than that.