The Average Gift Trap That Lets Outliers Run Your Data Story

By Joey Mechelle Farqué, Head of Content, VeraData

Remember that first stats class in school? Mean, median, mode. Then you’re taught about outliers, the weird numbers that can pull an average around like a magnet.

You pass the course, move on, and unless you’re a self-professed data nerd (hi) or work in data analytics, you forget most of it.

Fast-forward to nonprofit fundraising, where you live and die by dashboards. Those same concepts come back, but now they’re tied to revenue, budgets, and board expectations. And the most common mistake is the same one you made in that stats class: You trust the average before you understand the story behind it.

No worries. As the Donor Science people, we’ve got you.

The First-Read Problem

A national nonprofit organization that helps people connect with the beauty of the outdoors sought a smarter way to select recipients for mailings.

They relied on the tried-and-true selection process that had been in place for ages: recent donors with a minimum gift threshold.

The organization then asked VeraData to run a parallel approach.

We didn’t debate philosophy. We ran a fair test. Same offer, same timing, similar mail quantities. Different selection logic.

Our modeled approach didn’t just “perform well.” The results showed a higher response rate, more gifts, and higher overall revenue.

A Raised Eyebrow

Our model was better and more efficient. And yet, the client team’s initial reaction was skepticism.

Why? Because one number looked “bad” at first glance: average gift.

Our average gift was lower, so the immediate story became: “VeraData’s model found lower-value donors.”

That story was intuitive. It was also wrong.

The goal was to acquire $25+ names. When they saw the results, the first read was “average gift is lower.” But the real story was in the gift bands: VeraData drove more $25+ gifts overall, more $50+ gifts, and more $100+ gifts.

Sacrificing response for average gift is dangerous. Break out the granular detail and let the data guide the way.

We got the organization more of the names they wanted, even with a lower average gift. We also brought in a long tail of sub-$20 gifts that further subsidized the campaign, even though the traditional selection would never have touched those donors.

When Outliers Hijack the Average

Average gift is a blunt instrument: total dollars divided by the number of gifts. It’s extremely sensitive to outliers.

Here’s the simplest version, with a quick sketch after the list to make it concrete:

  • If a segment gets one surprise $3,000 gift, the average jumps.
  • That doesn’t mean the segment “has higher-value donors.”
  • It means the segment got a rare event.
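Here’s a minimal sketch of that in Python. The gift amounts are invented for illustration; the point is how far one rare event moves the mean while the median barely notices:

    from statistics import mean, median

    # A hypothetical segment: 49 everyday gifts between $25 and $100.
    gifts = [25, 35, 50] * 16 + [100]
    print(f"mean: ${mean(gifts):.2f}   median: ${median(gifts):.2f}")
    # mean: $37.96   median: $35.00

    # The same segment after one surprise $3,000 gift arrives.
    gifts_with_outlier = gifts + [3_000]
    print(f"mean: ${mean(gifts_with_outlier):.2f}   median: ${median(gifts_with_outlier):.2f}")
    # mean: $97.20   median: $35.00

One gift nearly tripled the mean; the median never moved, because the donors underneath didn’t change.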

In this campaign, the standard selection segment received a single large outlier gift in a high band that our segment didn’t happen to receive. That one gift inflated their average.

When you remove outliers (a standard way to check whether the average is telling the truth), the picture changes fast: the modeled approach’s advantage becomes clearer, not weaker.

This is the first myth to bust in nonprofit performance reporting:

Myth: A higher average gift means better targeting.

Reality: A higher average gift often means one donor behaved unusually.

The Truth Metric

If you want to know whether targeting worked, look at the shape of giving, not just a single summary number.

Ask (the sketch after this list shows one way to run the checks):

  • Did we drive more gifts overall?
  • Did we lift response rate?
  • Did we increase revenue per piece (or per contact)?
  • Did we grow gift counts in meaningful mid-level bands (not just $10–$25)?
  • Does the lift hold when you remove $1,000+ gifts?
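Here’s what that readout can look like, as a minimal Python sketch. Every number and field name below is fabricated; the structure of the checks is what matters:

    from statistics import mean

    def readout(gifts, pieces_mailed, outlier_floor=1_000):
        # gifts: list of gift amounts for a segment; pieces_mailed: mail quantity.
        kept = [g for g in gifts if g < outlier_floor]  # strip $1,000+ outliers
        return {
            "gift_count": len(gifts),
            "response_rate": len(gifts) / pieces_mailed,
            "revenue_per_piece": sum(gifts) / pieces_mailed,
            "gifts_50_plus": sum(g >= 50 for g in gifts),
            "gifts_100_plus": sum(g >= 100 for g in gifts),
            "avg_gift": mean(gifts),
            "avg_gift_no_outliers": mean(kept) if kept else 0.0,
        }

    # Fabricated segments: same mail quantity, different selection logic.
    standard = readout([30, 40, 55] * 6 + [3_000], pieces_mailed=10_000)
    modeled = readout([25, 30, 50, 60, 120] * 16, pieces_mailed=10_000)
    for name, seg in (("standard", standard), ("modeled", modeled)):
        print(name, seg)

In these made-up numbers, the standard segment wins exactly one check, raw average gift ($197 vs. $57), and only because of a single $3,000 gift. Strip the $1,000+ gifts and the modeled segment leads the trimmed average ($57 vs. $42) along with response rate, revenue per piece, and every gift band. That’s the same shape as the campaign above.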

In this test, the modeled selection didn’t just “find small gifts.” It produced more gifts, including more gifts at solid everyday levels (think $50+, $100–$250). That’s not a cosmetic win. That’s the base of predictable fundraising.

The VeraData Way

A traditional selection says: “Mail people who gave recently and above $X.”

A VeraData model asks a different question: “Who behaves like people who respond to this kind of appeal?”

Sure, that includes recency and gift history. But it also includes patterns most rule-based selects ignore:

  • Consistency vs. one-off giving
  • Momentum (are they trending up or down?)
  • Channel behavior (how they’ve responded in the past)
  • Timing patterns (when they tend to give)
  • Signals of affinity and likelihood to act now

Then we score a file so you’re not left guessing.

And here’s what most data vendors won’t say out loud: Modeling is only as good as the discipline around it. Holdouts. Validation. Post-campaign readouts. Learning what broke. Updating what drifted.
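To show the shape of that loop (only the shape; this toy sketch is nothing like our production models), here’s a scoring pass with a holdout, in Python with scikit-learn, on entirely invented data:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5_000

    # Invented behavioral features, one row per name on the file:
    # recency (months since last gift), frequency, momentum, mail responses.
    X = np.column_stack([
        rng.integers(1, 60, n),
        rng.integers(1, 20, n),
        rng.normal(0, 1, n),
        rng.integers(0, 5, n),
    ])
    y = rng.integers(0, 2, n)  # responded to a past appeal? (fake labels)

    # Discipline: fit on one slice, validate on a holdout the model never saw.
    X_train, X_hold, y_train, y_hold = train_test_split(
        X, y, test_size=0.2, random_state=0
    )
    model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

    # Score the holdout and sanity-check the ranking before trusting it.
    scores = model.predict_proba(X_hold)[:, 1]
    print("holdout AUC:", roc_auc_score(y_hold, scores))

With random fake labels the holdout AUC will sit near 0.5; on a real file, that number, re-checked after every campaign, is how you catch drift before it costs you a mailing.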

That’s why “Donor Science” isn’t branding for us. It’s in our DNA and part of everything we do. Donor behavior is signal, and signal deserves rigor.

VeraData has been building and refining machine-learning data engines for fundraising for more than 20 years, since before it was trendy to call everything “AI.” Over that time, we’ve learned the same lesson repeatedly: models improve when you subject them to continuous validation.

The Bigger Lesson: First-Look Data Rarely Tells the Full Story

Fundraising teams are busy. Everyone wants a quick answer. Dashboards reward speed.

The problem is that “quick” metrics are often the ones most easily fooled:

  • Averages
  • Blended ROAS
  • Single-number “quality” scores
  • Topline revenue without context

A donor file is a population. Populations have distributions. Distributions have outliers. If you ignore that, you can talk yourself out of a better strategy because one headline stat made you flinch.

If you want to pressure-test your current targeting without drama, we’ll help you set up a clean split test and a readout your CFO (and your future self) will trust.

Our job isn’t to make the first-look numbers feel good. Our job is to build predictable fundraising growth from the truth in the donor data.

Your Brand Isn’t the Issue, Probably

By Brooke Sconyers, VP Marketing, VeraData

Fundraising runs on donor trust, and trust runs on clarity and proof.

A strong brand helps people recognize you. It can make you feel familiar. It can make your work look competent. It does not automatically remove donor doubt.

Most donors aren’t sitting there thinking about brand architecture. They’re deciding whether the promise in front of them feels real, understandable, believable, and safe enough to fund. The research and data (aka Donor Science) back that up: donors consistently rank things like accurate appeals, accountability signals, and protection of donor information as top priorities.

So if a campaign looks great but still underperforms, it’s often because the donor experience asks people to take a leap without enough footing.

Trust Friction Tends to Look Boring (and that’s the point)

When trust is strong, donors don’t feel like they’re “being sold.” They feel like they understand what’s happening. When trust is weak, donors hesitate for reasons that are rarely poetic:

  • The ask is hazy, so the donor can’t picture what their gift does.
  • The proof is hard to find, or it reads like marketing copy that never meets the ground.
  • The follow-up is either generic or late, making the organization feel less in control than the story implied.
  • Data privacy feels like an unknown risk.

That last one is not small. Give.org has reported that if a charity a donor supports appears in the news for being hacked and having donor data stolen, 22.5% say they would stop donating, and 51.7% say they would hold off until satisfied the issue is resolved.

Here’s the uncomfortable reality: trust can drop faster from a back-end failure than from any front-end branding problem.

Trust shows up in what donors do next: whether they give again, upgrade, or stay. The metrics don’t care how pretty the campaign was.

Data is How You Keep Your Trust Story from Drifting Into Vibes

Data doesn’t replace brand. It keeps the brand honest.

It helps you answer the questions donors are quietly asking, without turning your appeal into a spreadsheet:

  • What outcomes did you produce last year?
  • What does it cost to do the work, and why?
  • Which programs changed because donors funded them?
  • What happens after I give, and how will you show me?

Research on donor perceptions aligns with this: financial transparency is associated with donor trust, and trust is associated with perceived performance.

That doesn’t mean donors want every line item. It means they respond to signals that the organization is accountable, clear, and not hiding the ball.

Our Creative Science experts at Teal Media put it plainly: design earns attention; specificity earns belief. Our job is to make the truth feel human without sanding off the details that make it credible.

When a Gorgeous Arts Brand Still Can’t Close the Gift

Arts organizations are a perfect test case because many already have strong aesthetics and a clear identity.

If a year-end appeal relies on broad language — support the arts, sustain excellence, keep culture alive — it may resonate emotionally but still fail to convert, because the donor doesn’t know what their gift will do.

A trust-forward version of an appeal keeps the beauty and adds decision-grade clarity:

  • A donor can fund a defined outcome, such as student matinees, a teaching-artist series, instrument repair, subsidized tickets, access programming, or commissioning new work.
  • Impact is surfaced where it matters: on the landing page, in the mail package, in the receipt, and in the follow-up.
  • The organization tells the truth in plain language about what it costs to run programs well.
  • Stewardship matches the promise: the donor gets a short update tied to what they funded, not a generic newsletter blast.

This is where Donor Science helps the brand story. You can test which outcomes different segments respond to, which messages drive higher second-gift rates, and where the journey drops off — from ad to landing page, landing page to form, form to thank-you, thank-you to retention.
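Here’s a minimal way to see where the journey drops off, with made-up step counts (the step names simply follow the journey described above):

    # Hypothetical counts at each step of one donor journey.
    funnel = [
        ("ad_click", 10_000),
        ("landing_page", 6_500),
        ("form_started", 2_100),
        ("gift_completed", 900),
        ("second_gift", 180),
    ]

    for (step, n), (next_step, next_n) in zip(funnel, funnel[1:]):
        print(f"{step} -> {next_step}: {next_n / n:.0%} carry through")

Whichever handoff sheds the most donors is where the trust friction lives, and it’s the first thing to test.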

And that testing loop is how you stop guessing. “Donors build confidence through repetition. When the message shifts from an ad to email to landing page, it creates doubt. Consistency reduces friction,” explains Lindsay Marino Long, Vice President of Donor Engagement & Retention at our Media Science partner, Faircom New York.

Engineering Credibility the VeraData Way

We start with the unglamorous part: data integrity. Traditional attribution and KPI reporting can be distorted when data is inconsistently coded, labeled, segmented, and measured. Donor Science focuses on capturing accurate, complete, timely, and consistent inputs, including granular signals most programs miss, and then standardizing them so the decisions hold up at scale.
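As a tiny example of what “inconsistently coded” does to reporting, here’s the kind of normalization pass (with invented channel codes) that has to happen before any KPI holds up:

    # Invented raw channel codes from systems that never agreed on labels.
    raw = ["DM", "direct mail", "Direct-Mail", "EM", "email", "E-mail"]

    CANONICAL = {
        "dm": "direct_mail",
        "direct mail": "direct_mail",
        "direct-mail": "direct_mail",
        "em": "email",
        "e-mail": "email",
        "email": "email",
    }

    clean = [CANONICAL[code.strip().lower()] for code in raw]
    print(clean)  # every gift now rolls up to one channel label in reporting

Until that pass exists, “response rate by channel” is three different numbers depending on who pulls the report.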

Donor Science keeps the story provable and testable by anchoring it in audience behavior, performance data, and the drivers of retention. Creative Science makes the story clear and emotionally accurate without getting lost in the fog. Media Science (Faircom New York) ensures the story shows up in the right places with enough consistency to build familiarity without creating noise.

Stewardship is the proof step, and the part that confirms the organization keeps the promise after the gift. Give.org’s work on donor trust and accountability themes repeatedly underscores how much donors care about honesty in appeals, responsible practice, and the protection of their information.

The appeal is the promise. Stewardship is where donors decide whether to believe you next time.

OK, Sure …

Brand polish helps. It just can’t do the heavy lifting on its own.

Trust is built when your message stays clear, your proof is easy to find, your experience remains consistent across channels, and your organization behaves as if it respects donors — especially with their data.