Built by attorneys who got tired of counting clauses
ClauseMesh started as an internal tool. It stopped being internal when every in-house lawyer we showed it to asked for a copy.
The problem was obvious. The solution took two years.
In 2022, Rachel Adeyemi led legal at a mid-market SaaS company that closed 340 commercial contracts in twelve months. Every agreement went through a manual review workflow: an associate would open the PDF, search for key clauses, copy-paste them into a checklist, and flag anything that deviated from the standard form.
Three contracts in that cohort contained liability caps below the company's minimum threshold. Two slipped through. The third was caught only because an associate happened to be reviewing it for an entirely different reason.
The problem wasn't attorney competence. The problem was that clause extraction is pattern-matching work — exactly the kind of task that should not require an attorney's judgment. Rachel built the first version of ClauseMesh in six months and spent the next eighteen months getting the extraction recall high enough to actually trust it in production.
Three things we don't compromise on
Precision over coverage
A clause that is extracted wrong is worse than a clause that isn't extracted at all: it creates false confidence. We target 94%+ recall on every clause type we support and do not expand the taxonomy until a new type meets that bar. We'd rather have 200 reliable clause types than 300 unreliable ones.
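To make that bar concrete, here is a minimal sketch of recall-gating a clause type before it ships. The function names, counts, and evaluation shape are illustrative, not our production pipeline; only the 0.94 threshold comes from the stated target.

```python
# Illustrative sketch: a clause type ships only once its measured recall
# on a labeled evaluation set clears the threshold. Names and numbers
# here are invented for illustration.

RECALL_THRESHOLD = 0.94

def recall(true_positives: int, false_negatives: int) -> float:
    """Fraction of labeled clauses the extractor actually found."""
    labeled = true_positives + false_negatives
    return true_positives / labeled if labeled else 0.0

def ready_to_ship(eval_results: dict) -> bool:
    """Gate: the clause type ships only if eval recall meets the bar."""
    return recall(eval_results["tp"], eval_results["fn"]) >= RECALL_THRESHOLD

# e.g. 188 of 200 labeled liability-cap clauses extracted correctly:
print(ready_to_ship({"tp": 188, "fn": 12}))  # 188/200 = 0.94 -> True
```

A type that finds 186 of 200 labeled clauses (93% recall) would wait for another evaluation cycle rather than ship.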
Your data stays yours
Corporate contracts contain material non-public information, trade terms, and attorney-client privileged communication. We process that data in isolated environments, do not log clause content, and do not use customer contracts to train or fine-tune any shared model. That's not a marketing claim — it's in the DPA.
Build with legal teams, not for them
Every new extraction type starts with interviews with the attorneys who review those clauses professionally. We don't build features from product intuition alone. Risk rubric design, obligation taxonomy structure, and deviation detection logic all come from working sessions with practicing lawyers.
What comes next
The current platform handles extraction, scoring, and deviation detection well. The next phase is making those outputs more actionable — specifically, generating draft redlines for the highest-risk deviations using your team's standard fallback positions, not generic AI-suggested language.
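The key design point is that a draft redline is looked up from the customer's own playbook, never freely generated. A minimal sketch of that lookup, with invented field names, sample text, and risk threshold (none of this is the shipping implementation):

```python
# Hypothetical sketch of fallback-driven redlining: the proposed edit is
# the team's stored fallback position for that clause type, not generated
# language. All names and sample text are illustrative.

from dataclasses import dataclass

@dataclass
class Deviation:
    clause_type: str      # e.g. "liability_cap"
    extracted_text: str   # the off-standard language found in the contract
    risk_score: float     # higher = riskier

# The team's standard fallback positions, keyed by clause type.
PLAYBOOK = {
    "liability_cap": "Liability is capped at 12 months of fees paid.",
}

def draft_redline(dev: Deviation, playbook: dict, risk_floor: float = 0.7):
    """Propose replacing high-risk deviant language with playbook language."""
    fallback = playbook.get(dev.clause_type)
    if fallback is None or dev.risk_score < risk_floor:
        return None  # no fallback on file, or not risky enough to redline
    return {"delete": dev.extracted_text, "insert": fallback}

dev = Deviation("liability_cap", "Liability is unlimited.", 0.9)
print(draft_redline(dev, PLAYBOOK))
```

If a clause type has no fallback on file, the system proposes nothing rather than inventing a position.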
We're also expanding the obligation register to include cross-contract conflict detection — identifying when two agreements impose contradictory obligations on the same party, which is a category of risk that standard clause review misses entirely.
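One way to frame that detection: normalize each obligation to a (party, subject) key and flag any key where two agreements impose different requirements. The tuple fields and sample data below are invented for illustration; this is a sketch of the idea, not the product.

```python
# Illustrative sketch of cross-contract conflict detection: obligations
# are grouped by (party, subject), and a group with more than one
# distinct requirement is a conflict. Sample data is invented.

from collections import defaultdict

# (contract_id, party, subject, requirement)
obligations = [
    ("MSA-014", "Acme Corp", "breach_notice_window", "72 hours"),
    ("DPA-007", "Acme Corp", "breach_notice_window", "24 hours"),
    ("MSA-014", "Acme Corp", "governing_law", "Delaware"),
]

def find_conflicts(obligations):
    """Return (party, subject) keys bound to >1 distinct requirement."""
    by_key = defaultdict(dict)  # (party, subject) -> {requirement: contract}
    for contract, party, subject, requirement in obligations:
        by_key[(party, subject)][requirement] = contract
    return {key: reqs for key, reqs in by_key.items() if len(reqs) > 1}

for (party, subject), reqs in find_conflicts(obligations).items():
    print(f"{party} / {subject}: {reqs}")
```

In this toy example the two breach-notice windows conflict, while the single governing-law obligation does not; reviewing either contract alone would surface neither problem.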
The taxonomy will expand to 300 clause types in Q3 2025. Every type we add goes through a 90-day accuracy evaluation before it ships. We'd rather delay a release than put an unreliable extraction type into a legal team's workflow.
We're hiring
We're a small team looking for ML engineers and legal domain experts who want to build in this space. If you care about precision over speed-to-ship, talk to us.