Our Methodology

How Views to Values translates your perspectives into a quantified ethical profile—and why we believe transparent methodology is essential for a tool that asks you to examine your values.

“From many views, one understanding.”

In the spirit of E pluribus unum, Views to Values seeks to map the diverse ethical foundations that underlie how we each see the world—not to flatten difference, but to illuminate it. By making our values visible, we can move toward the “more perfect union” that comes from genuinely understanding one another. This page explains exactly how we do that, because a tool that asks for your trust must earn it through transparency.

The Challenge: Quantifying Values

Values are deeply personal and inherently subjective. People don't naturally think about ethics in numerical terms. Simple rating scales (“rate fairness 1–10”) produce unreliable data because individuals anchor differently—one person's “7” is another's “4.” Forced ranking creates artificial hierarchies. The challenge is to surface genuine trade-off preferences in a way that is mathematically rigorous but feels natural to the participant.

Pairwise Choices (your trade-offs) → AHP Weights (quantified priorities) → Relevance Matrices (value-to-criteria bridge) → Scoring (weighted computation) → Your Profile (ethical perspective)

Step 1: Discovering Your Values Through Pairwise Comparisons

We use the Analytic Hierarchy Process (AHP), developed by Thomas Saaty in the 1970s, because it is one of the most rigorous and well-studied methods for quantifying subjective trade-offs. Rather than asking you to rate values on an abstract scale, AHP presents you with direct comparisons: “Which matters more to you, and by how much?”

The Four Ethical Objectives

  • Minimizing Suffering: reducing harm, danger, and deprivation
  • Maximizing Happiness: promoting flourishing, creativity, and meaningful experiences
  • Creating Equity: ensuring fairness, equal opportunity, and social mobility
  • Preserving Stability: maintaining institutional trust, traditions, and social cohesion

Each question pits two objectives against each other. You choose which matters more and indicate your conviction strength on a scale from 1 (slightly more) to 9 (overwhelmingly more). The winner receives the intensity value as points; the other receives the reciprocal (1/intensity). This reciprocal scoring is what makes AHP mathematically consistent—if you say A is 7× more important than B, then B is exactly 1/7th as important as A.

Code reference: app/questionnaire/questionnaire-client.tsx — the calculateWeights() function
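The reciprocal-matrix mechanics described above can be sketched in a few lines. This is a minimal illustration using the geometric-mean approximation of AHP's principal eigenvector, with hypothetical judgments; it is not the app's actual calculateWeights() implementation:

```typescript
// Minimal AHP sketch: build a reciprocal comparison matrix from pairwise
// judgments, then derive percentage weights. All data is hypothetical.
type Objective = "suffering" | "happiness" | "equity" | "stability";
const objectives: Objective[] = ["suffering", "happiness", "equity", "stability"];

// Each entry: [winner, loser, intensity on the 1–9 conviction scale].
const judgments: Array<[Objective, Objective, number]> = [
  ["suffering", "happiness", 3],
  ["suffering", "equity", 5],
  ["suffering", "stability", 7],
  ["happiness", "equity", 2],
  ["happiness", "stability", 3],
  ["equity", "stability", 2],
];

function ahpWeights(pairs: Array<[Objective, Objective, number]>): Record<Objective, number> {
  const n = objectives.length;
  // Start with 1s everywhere; the diagonal stays 1 (an objective equals itself).
  const m = objectives.map(() => objectives.map(() => 1));
  for (const [winner, loser, intensity] of pairs) {
    const i = objectives.indexOf(winner);
    const j = objectives.indexOf(loser);
    m[i][j] = intensity;
    m[j][i] = 1 / intensity; // reciprocal scoring keeps the matrix consistent
  }
  // The geometric mean of each row approximates the principal eigenvector.
  const gm = m.map((row) => Math.pow(row.reduce((p, x) => p * x, 1), 1 / n));
  const total = gm.reduce((s, x) => s + x, 0);
  const weights = {} as Record<Objective, number>;
  objectives.forEach((o, i) => { weights[o] = (100 * gm[i]) / total; });
  return weights;
}
```

With the sample judgments above, "suffering" wins every comparison and receives the largest weight, and the four weights sum to 100.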

Step 2: Mapping Values to Real-World Criteria

Knowing that someone values equity over stability doesn't directly tell you their view on automating warehouse jobs. We need a bridge—the relevance matrix.

Each theme (AI Labor, Education, Economic Policy) defines a set of measurable criteria. The relevance matrix defines how much each ethical objective cares about each criterion, using values from −1.0 to +1.0:

  • Positive (+): this objective favors higher scores on this criterion
  • Negative (−): this objective opposes higher scores on this criterion
  • Zero (0): this objective is neutral on this criterion

Criteria by Theme

  • AI Labor (6 criteria): Physical Risk, Empathy, High Stakes, Creativity, Social Mobility, Institutional Trust
  • Economic Policy (6 criteria): Wealth Concentration, Economic Dynamism, Basic Needs, Institutional Trust, Feasibility, Local Autonomy
  • Education (5 criteria): Career Readiness, Social Development, Critical Thinking, Practical Utility, Civic Responsibility

Code reference: lib/theme-config.ts — relevance matrices and criteria definitions for all themes
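As a concrete sketch, a relevance matrix can be represented as a nested record keyed by objective and then by criterion. The numbers below are illustrative placeholders, not the calibrated values in lib/theme-config.ts:

```typescript
// Illustrative relevance matrix for a theme. Every value lies in [-1, 1];
// these numbers are placeholders, not the app's calibrated entries.
type RelevanceMatrix = Record<string, Record<string, number>>;

const sampleRelevance: RelevanceMatrix = {
  suffering: { physicalRisk: 0.9, creativity: 0.0, socialMobility: 0.2 },
  happiness: { physicalRisk: -0.2, creativity: 0.8, socialMobility: 0.3 },
  equity: { physicalRisk: 0.1, creativity: 0.1, socialMobility: 0.9 },
  stability: { physicalRisk: 0.3, creativity: -0.1, socialMobility: 0.0 },
};
```

Reading one row: in this sketch, "Maximizing Happiness" strongly favors creative work (+0.8) and mildly opposes physical risk (−0.2).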

Share feedback on relevance matrices

Step 3: Bringing It All Together: The Scoring Formula

With your ethical weights and the relevance matrix, we compute a personalized score for every item (occupation, policy, or education approach). The formula combines your priorities with each item's characteristics:

// For each criterion c in the theme, summing over objectives o:
criterionWeight = Σ(objectiveWeight[o] / 100 × relevanceMatrix[o][c])
totalScore += itemScore[c] × criterionWeight
// Normalize to 0–100 scale:
normalizedScore = 50 + totalScore × 5
result = clamp(normalizedScore, 0, 100)
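In executable form, under the assumptions that objective weights are percentages summing to 100 and relevance values lie in [−1, 1], the computation might look like this sketch (not the actual calculateItemScore()):

```typescript
// Executable sketch of the scoring formula. Weights are percentages summing
// to 100; relevance values lie in [-1, 1]; itemScores holds the item's
// per-criterion ratings. All names and numbers are illustrative.
type Weights = Record<string, number>;
type Matrix = Record<string, Record<string, number>>;

function scoreItem(weights: Weights, relevance: Matrix, itemScores: Record<string, number>): number {
  let total = 0;
  for (const criterion of Object.keys(itemScores)) {
    // criterionWeight = Σ over objectives o of (weight[o]/100) × relevance[o][criterion]
    let criterionWeight = 0;
    for (const objective of Object.keys(weights)) {
      criterionWeight += (weights[objective] / 100) * (relevance[objective]?.[criterion] ?? 0);
    }
    total += itemScores[criterion] * criterionWeight;
  }
  // Normalize: 50 is the neutral midpoint, ×5 spreads the distribution.
  const normalized = 50 + total * 5;
  return Math.min(100, Math.max(0, normalized));
}
```

For example, a profile that puts 100% of its weight on one objective, applied to a criterion with relevance +1.0 and an item score of 2, yields 50 + 2 × 5 = 60; if every relevance value were 0, every item would score exactly 50.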

Why This Normalization?

The 50 + totalScore × 5 formula is a deliberate design choice. The midpoint of 50 represents neutrality, and the ×5 scaling factor spreads results across the full 0–100 range without clustering them near the center. The goal is a distribution in which different value profiles yield meaningfully different scores, while avoiding the artificial polarization that would push every item toward 0 or 100.

This is one of the most important challenges in the methodology: finding the balance between differentiation and accuracy. Too little spread and every item scores near 50, making the tool useless. Too much spread and the results feel exaggerated. The current calibration is a starting point—community feedback on whether score distributions feel meaningful is critical to refining this parameter.

Code reference: lib/theme-config.ts — the calculateItemScore() function

Share feedback on the scoring formula

Step 4: Your Ethical Profile

Your weight distribution maps to a named ethical profile based on your primary and secondary objectives. This helps create a recognizable identity for your ethical perspective, making it easier to understand and compare with others.

Code reference: lib/relevance-matrix.ts — getProfileName() and getEducationPerspective()
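The mapping from a weight distribution to a profile name can be sketched as a lookup on the top two objectives. The labels below are hypothetical, invented for illustration; the real names live in getProfileName():

```typescript
// Sketch: derive a profile label from the primary and secondary objectives.
// The label table is hypothetical; real names live in lib/relevance-matrix.ts.
function profileName(weights: Record<string, number>): string {
  const ranked = Object.entries(weights).sort((a, b) => b[1] - a[1]);
  const primary = ranked[0][0];
  const secondary = ranked[1][0];
  const labels: Record<string, string> = {
    "suffering|equity": "Guardian",   // hypothetical label
    "equity|stability": "Reformer",   // hypothetical label
  };
  // Fall back to a descriptive label for unlisted combinations.
  return labels[`${primary}|${secondary}`] ?? `${primary}-led, ${secondary}-leaning`;
}
```

For instance, a profile weighted 50/30/12/8 toward suffering and equity would resolve to the "suffering|equity" entry.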

A Starting Point, Not a Final Answer

This methodology is a starting point for continuous refinement. The relevance matrices, the scoring formula, the normalization parameters, and the question design all benefit from community feedback. We are searching for a more accurate model—one that faithfully reflects the many different views that come together to form our collective consciousness.

Feedback from users about computed weights, predicted views, and analysis results is critical to this process. If your results don't match your self-understanding, that is valuable data. It might mean:

  • A relevance matrix value needs recalibration
  • The scoring normalization needs adjustment for this theme
  • A criterion is missing or weighted incorrectly
  • The pairwise questions aren't capturing a nuance in your values

Every piece of feedback moves us closer to a model that accurately predicts how different value systems lead to different perspectives—which is the entire purpose of this tool.

Future Directions

  • Integrating deliberative polling techniques that let groups collectively explore trade-offs
  • Developing longitudinal tracking to show how individual and community values evolve over time
  • Applying machine learning to identify hidden patterns in how values cluster across demographics

Share feedback on the methodology

The Autonomous Ethics Companion

Our primary goal is to build an autonomous AI companion that evolves this platform — collecting feedback through email, chat, and in-app engagement; devising methodology improvements; and developing new content informed by real community interaction.

Share feedback on the vision

Open Source: See the Code, Shape the Future

Views to Values is fully open source. Every formula, every relevance matrix, every scoring decision is visible in the code. We believe that a tool asking people to examine their values must itself be open to examination.

Key Source Files

  • Scoring Engine: lib/theme-config.ts — calculateItemScore(), relevance matrices, criteria definitions
  • Profile Matching: lib/relevance-matrix.ts — getProfileName(), education perspectives
  • Questionnaire Logic: app/questionnaire/questionnaire-client.tsx — AHP weight calculation
  • AI Analysis: app/api/analyze-submission/route.ts — theme-specific analysis with Gemini

We welcome contributions of all kinds: bug reports, relevance matrix refinements, new theme proposals, academic review of the methodology, and ideas for improving the user experience. If you have expertise in decision science, ethics, survey design, or data analysis, your perspective is especially valuable.

E Pluribus Unum

The premise of Views to Values is not that we should all agree. It is that disagreement becomes productive when we understand its roots.

Every person who completes this assessment adds a perspective to our collective understanding of how values shape views. Together, these many perspectives form something larger than any individual view—a map of our shared ethical landscape.

This is the essence of E pluribus unum: not uniformity, but unity through understanding. A more perfect union is not one where everyone thinks alike, but one where we see clearly why we think differently—and choose dialogue over dismissal.