About ARIA

Autonomous Research Infrastructure for AI agent security. Continuous research, transparent methodology, every finding classified by Threat Matrix technique ID.

Mission

Find real, exploitable vulnerabilities in AI agent ecosystems with working proof-of-concept code, complete reproduction steps, and independently verifiable evidence. Feed every finding back into HMA checks, OASB controls, and Registry trust scoring. Establish the OpenA2A Threat Matrix as the source of record for AI agent attack classification.
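As an illustrative sketch only (the field names and helper below are assumptions, not ARIA's actual schema), a finding that meets this bar carries at minimum a Threat Matrix technique ID, working proof-of-concept code, reproduction steps, and pointers to verifiable evidence:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """Hypothetical ARIA finding record (illustrative, not the real schema)."""
    technique_id: str               # OpenA2A Threat Matrix technique ID
    title: str
    poc_path: str                   # working proof-of-concept code
    reproduction_steps: list[str]   # complete, ordered reproduction steps
    evidence_urls: list[str] = field(default_factory=list)  # independently verifiable

    def is_disclosable(self) -> bool:
        # A finding ships only with a classification, a PoC, and repro steps.
        return bool(self.technique_id and self.poc_path and self.reproduction_steps)
```

The point of the sketch is the gate in `is_disclosable`: a finding without all three required parts never enters the disclosure pipeline.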

Seven specialized agents

Agent            Role                            Data source
ARIAhyp          Hypothesis generation           GitHub trending, npm, CVE feeds
ARIAscout        Exposure research               Shodan, Censys
ARIAtrap (new)   Behavioral threat intelligence  TrapMyAgent fleet, AgentPwn, HoneyFinder
ARIAred          Exploit development             Targeted research
ARIAblue         Patch development               Target repo analysis
ARIApulse        Statistics, indices, reports    All ARIA data
ARIAdesk         Disclosure management           CVE pipeline

Six research tracks

Methodology philosophy

  • One classification, one resolver. Every observation maps to a Threat Matrix technique. No per-product taxonomies, no soft tags.
  • Reproducibility is the standard. Every count in a report is reproducible from the public OpenA2A Registry telemetry export bundle to within five percent, in under two hours.
  • Distribution before mean. Subjective scoring (e.g., the Sophistication Index) ships distribution-only until a supervised classifier reaches the measurement threshold. We do not publish a mean we cannot defend.
  • Empty beats fabricated. An omitted cell beats an estimated number presented as measured.
  • One number, one home. Numeric facts live on their spoke dashboard. Other pages either link to the spoke or carry an explicit time-snapshot tag. A CI lint enforces this.
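The first two rules above can be sketched as code. Everything here is an assumption for illustration: the observation signatures, technique IDs, and tolerance helper are invented, not ARIA's implementation.

```python
# Hypothetical sketch of "one classification, one resolver" and the
# five-percent reproducibility rule (illustrative only).

TECHNIQUE_MAP = {
    # observation signature -> Threat Matrix technique ID (invented entries)
    "prompt_injection_via_tool_output": "T-PI-001",
    "credential_exfil_via_mcp_server": "T-EX-002",
}

def resolve(observation_signature: str) -> str:
    """Single resolver: every observation maps to exactly one technique ID."""
    try:
        return TECHNIQUE_MAP[observation_signature]
    except KeyError:
        # No soft tags, no per-product fallback: unmapped observations fail loudly.
        raise ValueError(f"unclassified observation: {observation_signature}")

def reproducible(reported: int, recomputed: int, tolerance: float = 0.05) -> bool:
    """A published count must match the recount from the public
    telemetry export bundle to within five percent."""
    if reported == 0:
        return recomputed == 0
    return abs(reported - recomputed) / reported <= tolerance
```

Failing loudly on an unmapped observation is the design choice that rules out per-product taxonomies: an observation that cannot be classified blocks the pipeline instead of picking up a soft tag.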

See also