Reflection · Feb 27, 2026 · 6 min read

I Faked My Own Credibility to Prove a Point

A controlled experiment where I applied the full synthetic authority playbook to my own digital presence — then used my detection engine to analyze the results.


This is the most uncomfortable paper I’ve written.

For six previous papers, I documented how synthetic organizations are built, how they evade detection, and how they corrupt knowledge systems. I built a detection engine. I scanned 1,700+ targets. I published formal frameworks and proposed countermeasures.

Then I realized I had a credibility problem of my own: how do you prove your detection framework works if you’ve never tested it against a known-synthetic target?

So I tested it on myself.

The Experiment

I deliberately applied the full escalation playbook — the same one I documented across my previous research — to my own digital presence. I constructed the complete credibility stack:

  • Published papers on open platforms
  • Built cross-references between publications
  • Created a research identity with institutional framing
  • Established a web presence that signals authority
  • Built verification artifacts (ORCID, DOI registration, institutional affiliations)

Then I pointed Helix Fabric at the result and analyzed my own digital footprint through the same 15-signal detection engine I built to catch synthetic entities.

The 40-Proposition Framework

The paper establishes a 40-proposition framework that maps how fabricated authority escalates from simple to systemic:

At the base level, you have individual fabrications — a fake headshot, a fabricated publication, a synthetic persona. These are the building blocks.

At the middle level, fabrications combine into endorsement structures — entities that validate each other, creating the appearance of independent confirmation. This is where the “loop” forms: A endorses B, B endorses C, C cites A. Each entity appears to have external validation, but the validation network is self-contained.
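The loop structure described above is detectable as a cycle in a directed endorsement graph. The paper doesn't publish the detection engine's internals, so this is a minimal illustrative sketch (function name and representation are my own) of flagging entities whose "external" validation traces back into a closed loop:

```python
# Illustrative sketch: find entities that sit on a directed endorsement
# cycle (A endorses B, B endorses C, C cites A). Each looks externally
# validated, but the validation network is self-contained.
from collections import defaultdict

def find_endorsement_cycles(edges):
    """Return the set of nodes that lie on a directed cycle.

    `edges` is a list of (endorser, endorsed) pairs.
    """
    graph = defaultdict(list)
    for endorser, endorsed in edges:
        graph[endorser].append(endorsed)

    on_cycle = set()

    def visit(node, path):
        if node in path:
            # Everything from the first occurrence onward closes a loop.
            on_cycle.update(path[path.index(node):])
            return
        for nxt in graph[node]:
            visit(nxt, path + [node])

    for start in list(graph):
        visit(start, [])
    return on_cycle

# The loop from the text: A -> B -> C -> A.
print(sorted(find_endorsement_cycles([("A", "B"), ("B", "C"), ("C", "A")])))
# -> ['A', 'B', 'C']
```

A chain without the closing edge (A endorses B, B endorses C, and nothing cites back) yields an empty set, which is the topological difference between genuine external validation and a manufactured loop.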

At the top level, endorsement structures crystallize into synthetic institutions — entities that operate with apparent legitimacy within real ecosystems. They participate in standards processes, partner with real organizations, and influence real decisions.

The 40 propositions trace this escalation path step by step, identifying the specific signals that distinguish legitimate credibility from manufactured credibility at each stage.

What the Detection Engine Found

Here’s where it gets uncomfortable. When I scanned my own digital presence — the one I deliberately constructed using the techniques I documented — the detection engine flagged several signals:

  • Temporal clustering: Multiple publications appeared within a narrow time window
  • Self-referential citation: Papers cite my own prior work extensively
  • Infrastructure correlation: All properties share hosting and deployment patterns
  • Content coherence: The thematic consistency across publications is unusually tight

These are genuine detection signals. They fire correctly. A synthetic entity would exhibit exactly these patterns.
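The first of those signals, temporal clustering, is the easiest to sketch. The thresholds below are illustrative, not the engine's actual parameters: the idea is simply that a corpus built in a burst packs its publication dates into a narrow window, while an organically grown corpus spreads out.

```python
# Illustrative temporal-clustering check: flag a corpus if most of its
# publication dates fall inside one narrow sliding window. The window
# width and fraction thresholds here are made up for the example.
from datetime import date

def temporal_clustering_flag(pub_dates, window_days=30, min_fraction=0.8):
    """Return True if >= min_fraction of publications land inside
    any window_days-wide window anchored at some publication date."""
    dates = sorted(pub_dates)
    n = len(dates)
    for i in range(n):
        in_window = sum(1 for d in dates[i:] if (d - dates[i]).days <= window_days)
        if in_window / n >= min_fraction:
            return True
    return False

burst = [date(2026, 2, d) for d in (1, 3, 5, 8, 12)]    # five papers in 11 days
organic = [date(2021 + y, 6, 1) for y in range(5)]      # one paper a year
print(temporal_clustering_flag(burst), temporal_clustering_flag(organic))
# -> True False
```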

But I’m not synthetic. I’m a real person doing real research. The signals fire because a legitimate independent researcher building a body of work and a synthetic entity manufacturing credibility produce structurally similar behavior patterns.

The Two Key Findings

Finding 1: Fabricating convincing authority is achievable quickly and inexpensively. The complete credibility stack I constructed took weeks, not years. The cost was negligible. The tools are all publicly available. Any motivated actor could do the same.

Finding 2: Detection is possible — but requires purpose-built verification infrastructure. Surface-level signals can’t distinguish real from synthetic. You need deep signals: archive history analysis, reference graph topology, infrastructure forensics, content provenance verification. These signals exist, but they require systems specifically designed to detect them.

The Uncomfortable Implication

The experiment reveals a structural symmetry between building credibility and fabricating it. The same actions — publishing, creating web properties, building citation networks, establishing institutional identity — serve both legitimate researchers and fabricators.

This means detection systems must go beyond checking what someone has done and examine how and when they did it. The difference between a researcher who published six papers over three years and a fabricator who generated six papers in three days isn’t visible in the publication list. It’s visible in the temporal metadata.
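That temporal-metadata difference is cheap to measure. As a sketch under assumed data (the dates and thresholds are invented for illustration, not drawn from the paper), the median gap between publication timestamps separates the two cases the publication list cannot:

```python
# Illustrative sketch: two publication lists of six papers each look
# identical, but the gaps between their timestamps do not. Median
# inter-publication gap is one cheap temporal-metadata signal.
from datetime import date
from statistics import median

def median_gap_days(pub_dates):
    """Median number of days between consecutive publications."""
    ds = sorted(pub_dates)
    return median((b - a).days for a, b in zip(ds, ds[1:]))

researcher = [date(2023, 1, 15), date(2023, 7, 2), date(2024, 1, 20),
              date(2024, 8, 5), date(2025, 2, 11), date(2025, 12, 30)]
fabricator = [date(2026, 2, 25), date(2026, 2, 25), date(2026, 2, 26),
              date(2026, 2, 26), date(2026, 2, 27), date(2026, 2, 27)]

print(median_gap_days(researcher))  # roughly half a year between papers
print(median_gap_days(fabricator))  # effectively zero
```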

It also means that the credibility systems we rely on — academic publishing, professional networking, standards participation — are not designed to distinguish real participants from synthetic ones. They were built for a world where fabrication was expensive and slow. In a world where fabrication is cheap and fast, their assumptions are broken.

A Self-Referential Proof

There’s a deliberate irony in this work. I used synthetic authority techniques to build the research identity from which I’m publishing research about synthetic authority techniques. The paper is its own evidence.

If you doubt the findings, examine the methodology. If the methodology is sound, the findings stand — regardless of whether the author’s credibility was “naturally” or “deliberately” constructed. That’s the point. In a world of synthetic authority, the proof must be in the work, not the credentials.