[Paper] I hope we don't do to trust what advertising has done to love
Source: arXiv - 2604.28113v1
Overview
Jade Alglave’s paper asks a provocative question: What does it really mean to trust an AI system when the word “trust” has already been stretched thin by advertising’s cheap use of “love”? By reframing trust as a set of concrete, measurable “pillars” and proposing “trust vectors” that can be exposed through an agentic system’s interface, the work aims to spark a cross‑disciplinary conversation about building trustworthy AI that goes beyond marketing hype.
Key Contributions
- Conceptual taxonomy of “trust pillars” – a structured set of dimensions (e.g., reliability, transparency, alignment, accountability) that can be operationalised for AI systems.
- “Trust vectors” design pattern – a concrete way to surface each pillar through an agentic system’s UI/API, turning abstract trust into actionable signals (a minimal data‑structure sketch follows this list).
- Critical analysis of advertising’s impact on the semantics of “trust” and “love,” highlighting how linguistic drift can undermine public confidence in AI.
- A call‑to‑action for a shared, interdisciplinary dialogue that includes computer scientists, ethicists, regulators, and civil‑society groups.
- Preliminary guidelines for measuring trust pillars using existing metrics (e.g., model robustness tests, explainability scores) and user‑centric surveys.
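To make the pillar and trust‑vector ideas concrete, here is a minimal sketch of how pillar scores might be represented and rendered by an agentic system. The pillar names, the TrustVector class, and its methods are illustrative assumptions, not an interface defined in the paper.

```python
from dataclasses import dataclass, field

# Hypothetical pillar names drawn from the taxonomy above; the paper's
# actual set and naming may differ.
PILLARS = ("reliability", "transparency", "alignment", "accountability")

@dataclass
class TrustVector:
    """One normalised score per trust pillar, plus a short rationale."""
    scores: dict = field(default_factory=dict)      # pillar -> score in [0, 1]
    rationales: dict = field(default_factory=dict)  # pillar -> explanation

    def set_pillar(self, pillar: str, score: float, rationale: str = "") -> None:
        if pillar not in PILLARS:
            raise ValueError(f"unknown pillar: {pillar}")
        if not 0.0 <= score <= 1.0:
            raise ValueError("scores are normalised to [0, 1]")
        self.scores[pillar] = score
        self.rationales[pillar] = rationale

    def as_report(self) -> str:
        """Render the vector as the kind of dashboard text a UI might show."""
        lines = []
        for pillar in PILLARS:
            score = self.scores.get(pillar)
            shown = f"{score:.0%}" if score is not None else "not measured"
            rationale = self.rationales.get(pillar, "")
            suffix = f" ({rationale})" if rationale else ""
            lines.append(f"{pillar.title()}: {shown}{suffix}")
        return "\n".join(lines)

# Example: an agent exposing its trust vector alongside an answer
# (the scores and rationales here are invented for illustration).
tv = TrustVector()
tv.set_pillar("alignment", 0.85, "passed 17/20 alignment probes")
tv.set_pillar("reliability", 0.92, "low output variance under perturbation")
print(tv.as_report())
```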
Methodology
Alglave adopts a mixed‑methods approach:
- Literature synthesis – reviewing how “trust” is framed in AI safety, HCI, and advertising research.
- Conceptual modeling – distilling recurring themes into a set of trust pillars, each defined with observable indicators (e.g., “predictability” measured by variance in output under perturbations; sketched in code at the end of this section).
- Design sketching – proposing the “trust vector” UI pattern, where an agentic system presents a dashboard of pillar scores, confidence intervals, and actionable explanations.
- Stakeholder mapping – outlining which audiences (developers, end‑users, regulators) would consume each pillar and how pillar measurements could be folded into existing development pipelines.
The methodology is deliberately lightweight to keep the discussion open‑ended and encourage community refinement.
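As one illustration of an observable indicator, the “predictability” example above can be approximated by measuring how much a model’s output varies when its input is slightly perturbed. This is a minimal sketch assuming a model exposed as a plain numeric function; the Gaussian perturbation scheme and the squashing of variance into a [0, 1] score are assumptions, not the paper’s prescription.

```python
import numpy as np

def predictability_score(model, x, n_perturbations=32, noise_scale=0.01, rng=None):
    """Return a [0, 1] predictability score: 1 means the model's numeric
    output is unchanged under small input perturbations; lower means
    the output drifts. `model` is any callable mapping an array to a float."""
    rng = rng or np.random.default_rng(0)
    outputs = []
    for _ in range(n_perturbations):
        noisy = x + rng.normal(scale=noise_scale, size=x.shape)
        outputs.append(model(noisy))
    variance = float(np.var(outputs))
    # Squash variance into [0, 1]; the constant controls how harshly
    # output drift is penalised (an arbitrary choice here).
    return 1.0 / (1.0 + 10.0 * variance)

# Toy example: a linear "model" is highly predictable under small noise.
weights = np.array([0.5, -0.2, 0.1])
model = lambda v: float(weights @ v)
x = np.array([1.0, 2.0, 3.0])
print(f"Reliability/predictability: {predictability_score(model, x):.2f}")
```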
Results & Findings
- Trust is multi‑dimensional. No single metric captures “trustworthiness”; the pillars collectively explain why users may accept or reject an AI’s recommendation.
- Advertising’s linguistic co‑optation dilutes trust. When “love” is used as a sales gimmick, the public’s mental model of trust becomes fuzzy, making it harder to communicate technical guarantees.
- Trust vectors are feasible. A prototype dashboard (illustrated in the paper) demonstrates that existing model‑evaluation tools can be mapped onto pillar scores and displayed in real time.
- Early user feedback (informal surveys) suggests that developers find pillar‑based checklists more actionable than vague “trust” statements, while end‑users appreciate visual confidence indicators.
Practical Implications
- For developers: Integrate pillar checklists into CI/CD pipelines (e.g., automated robustness testing → “Reliability” score). The trust‑vector UI can be generated automatically from these scores, giving product teams a ready‑made trust report; a gate‑script sketch follows this list.
- For product managers: Use pillar scores as a risk‑management KPI, enabling data‑driven decisions about feature roll‑outs or regulatory compliance.
- For regulators & auditors: The pillar framework offers a standardised evidence set (e.g., transparency logs, alignment audits) that can be requested during compliance reviews.
- For end‑users: Trust vectors can be embedded in consumer‑facing applications (e.g., a chatbot showing “Alignment: 85 % – see why”) to demystify AI decisions and reduce over‑reliance or unwarranted skepticism.
- For advertisers: The paper’s critique encourages a shift from “love‑selling” to transparent value propositions, potentially restoring credibility for brands that rely on AI‑driven personalization.
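A minimal sketch of the CI/CD integration suggested for developers above: a gate script that fails the pipeline when any required pillar score is missing or falls below a per‑pillar threshold. The JSON score file, threshold values, and pillar names are hypothetical; the paper does not prescribe a concrete pipeline format.

```python
import json
import sys

# Hypothetical per-pillar minimums; real projects would tune these per
# domain (the paper notes that pillar weighting is context-dependent).
THRESHOLDS = {"reliability": 0.90, "transparency": 0.70, "alignment": 0.80}

def gate(scores_path="trust_vector.json"):
    """Exit non-zero if any required pillar is missing or below its
    threshold, so the CI/CD pipeline blocks the release."""
    with open(scores_path) as f:
        scores = json.load(f)  # e.g. {"reliability": 0.92, "alignment": 0.85}
    failures = [
        (pillar, scores.get(pillar, 0.0), minimum)
        for pillar, minimum in THRESHOLDS.items()
        if scores.get(pillar, 0.0) < minimum
    ]
    for pillar, got, minimum in failures:
        print(f"FAIL {pillar}: {got:.2f} < required {minimum:.2f}")
    sys.exit(1 if failures else 0)

if __name__ == "__main__":
    gate(sys.argv[1] if len(sys.argv) > 1 else "trust_vector.json")
```

Run as a pipeline step, a gate like this turns the pillar taxonomy into an enforceable release criterion rather than a vague “trust” statement.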
Limitations & Future Work
- Empirical validation is limited. The paper presents only informal user feedback; large‑scale user studies are needed to confirm that trust vectors improve real‑world trust.
- Pillar weighting is context‑dependent. The current taxonomy treats pillars equally, but different domains (healthcare vs. entertainment) may require custom weighting schemes.
- Tooling gaps. While many pillar metrics exist, a unified library that automatically computes and visualises a full trust vector is still missing.
- Future directions include:
  - Building open‑source tooling for pillar measurement.
  - Conducting longitudinal studies on how trust vectors affect user behaviour.
  - Extending the framework to multi‑agent ecosystems where trust must be negotiated across interacting AI components.
Authors
- Jade Alglave
Paper Information
- arXiv ID: 2604.28113v1
- Categories: cs.CY, cs.AR, cs.SE
- Published: April 30, 2026