
The Average Gift Trap That Lets Outliers Run Your Data Story

01/21/2026

By Joey Mechelle Farqué, Head of Content, VeraData

Remember that first stats class in school? Mean, median, mode. Then you’re taught about outliers: the weird numbers that can pull an average around like a magnet.

You pass the course, move on, and unless you’re a self-professed data nerd (hi) or work in data analytics, you forget most of it.

Fast-forward to nonprofit fundraising, where you live and die by dashboards. Those same concepts come back, but now they’re tied to revenue, budgets, and board expectations. And the most common mistake is the same one you made in that stats class: You trust the average before you understand the story behind it.

No worries. As the Donor Science people, we’ve got you.

The First-Read Problem

A national nonprofit organization that helps people connect with the beauty of the outdoors sought a smarter way to select recipients for mailings.

They relied on the tried-and-true selection process that had been in place for ages: recent donors with a minimum gift threshold.

The organization then asked VeraData to run a parallel approach.

We didn’t debate philosophy. We ran a fair test. Same offer, same timing, similar mail quantities. Different selection logic.

Our modeled approach didn’t just “perform well.” The results showed a higher response rate, more gifts, and higher overall revenue.

A Raised Eyebrow

Our model was better and more efficient. And yet, the client team’s initial reaction was skepticism.

Why? Because one number looked “bad” at first glance: average gift.

Our average gift was lower, so the immediate story became: “VeraData’s model found lower-value donors.”

That story was intuitive. It was also wrong.

The goal was to acquire $25+ names. When they saw the results, the first read was “average gift is lower.” But the real story was in the gift bands: VeraData drove more $25+ gifts overall, more $50+ gifts, and more $100+ gifts.

Sacrificing response for average gift is dangerous. Break out the granular detail and let the data guide the way.

We got the organization more of the names they wanted, even with a lower average gift. We also brought in a long tail of sub-$20 gifts that further subsidized the campaign, even though those weren’t the names the organization set out to acquire.

When Outliers Hijack the Average

Average gift is a blunt instrument: total dollars divided by the number of gifts. It’s extremely sensitive to outliers.

Here’s the simplest version:

In this campaign, the standard selection segment received a single large outlier gift in a high band that our segment didn’t happen to receive. That one gift inflated their average.

When you remove outliers (a standard way to check whether the average is telling the truth), the picture changes fast: the modeled approach’s advantage becomes clearer, not weaker.
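
To see the mechanics in plain numbers, here is a small Python sketch. The gift amounts are made up for illustration, not the campaign’s actual figures; the point is that one outsized gift can swing a segment’s average, and that dropping the top gift and re-averaging is an easy sanity check.

```python
# Hypothetical gift amounts for two segments; illustrative only.
standard_gifts = [25, 25, 30, 35, 50, 1000]        # one large outlier gift
modeled_gifts = [25, 25, 25, 30, 40, 50, 60, 100]  # more gifts, no outlier

def average_gift(gifts):
    """Average gift: total dollars divided by the number of gifts."""
    return sum(gifts) / len(gifts)

def average_excluding_top(gifts, n=1):
    """A simple outlier check: drop the n largest gifts and re-average."""
    trimmed = sorted(gifts)[:-n] if n else list(gifts)
    return sum(trimmed) / len(trimmed)

print(round(average_gift(standard_gifts), 2))          # 194.17 - inflated by the single $1,000 gift
print(round(average_gift(modeled_gifts), 2))           # 44.38  - looks "worse" at first glance
print(round(average_excluding_top(standard_gifts), 2)) # 33.0   - the outlier was doing the work
print(round(average_excluding_top(modeled_gifts), 2))  # 36.43  - now the modeled segment compares favorably
```

On a real file you would run the same check at scale: trim or cap the largest gifts, or compare medians, and see whether the “winner” changes.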

This is the first myth to bust in nonprofit performance reporting:

Myth: A higher average gift means better targeting.

Reality: A higher average gift often means one donor behaved unusually.

The Truth Metric

If you want to know whether targeting worked, look at the shape of giving, not just a single summary number.

Ask how the gifts distribute across bands: how many $25+ gifts, how many $50+, how many $100+, and what the comparison looks like once any single outsized gift is set aside.

In this test, the modeled selection didn’t just “find small gifts.” It produced more gifts, including more gifts at solid everyday levels (think $50+, $100–$250). That’s not a cosmetic win. That’s the base of predictable fundraising.
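
As a rough illustration of what “shape of giving” means in practice, the sketch below buckets gift amounts into bands and counts them per segment. The gift lists and band edges are assumptions for the example, not the client’s actual report.

```python
from collections import Counter

# Band floors and labels are assumptions for this example.
BANDS = [(25, "$25-$49"), (50, "$50-$99"), (100, "$100-$249"), (250, "$250+")]

def band_label(amount):
    """Map a gift amount to its band; anything under $25 lands in a sub-$25 bucket."""
    label = "sub-$25"
    for floor, name in BANDS:
        if amount >= floor:
            label = name
    return label

def band_counts(gifts):
    """Count gifts per band: the shape of giving, not one summary number."""
    return Counter(band_label(g) for g in gifts)

# Hypothetical results for two selections.
standard_gifts = [25, 25, 30, 35, 50, 1000]
modeled_gifts = [10, 15, 25, 25, 25, 30, 40, 50, 60, 100]

print(band_counts(standard_gifts))  # 4 gifts at $25-$49, 1 at $50-$99, 1 at $250+
print(band_counts(modeled_gifts))   # 5 at $25-$49, 2 at $50-$99, 1 at $100-$249, plus a sub-$25 tail
```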

The VeraData Way

A traditional selection says: “Mail people who gave recently and above $X.”

A VeraData model asks a different question: “Who behaves like people who respond to this kind of appeal?”

Sure, that includes recency and gift history. But it also includes behavioral patterns that most rule-based selects ignore.

Then we score a file so you’re not left guessing.
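
For readers who want a concrete picture of what scoring a file can look like, here is a minimal propensity-model sketch on synthetic data using scikit-learn. It is not VeraData’s engine; the features and data are invented placeholders, standing in for signals like recency, frequency, and gift level.

```python
# Minimal propensity-scoring sketch on synthetic data; NOT VeraData's actual engine.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend training data: three standardized features per past prospect
# (stand-ins for things like recency, frequency, prior gift level)
# and a 0/1 flag for whether they responded to a comparable appeal.
X_train = rng.normal(size=(1000, 3))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# Score a new file: estimated probability of responding to this kind of appeal.
X_file = rng.normal(size=(5, 3))
scores = model.predict_proba(X_file)[:, 1]

# Rank by score instead of applying a flat "gave $X recently" rule.
for idx in np.argsort(scores)[::-1]:
    print(f"prospect {idx}: score {scores[idx]:.3f}")
```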

And here’s what most data vendors won’t say out loud: Modeling is only as good as the discipline around it. Holdouts. Validation. Post-campaign readouts. Learning what broke. Updating what drifted.
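
As one small illustration of that discipline, a holdout readout can be as simple as the sketch below: set aside a random slice of the file before selection, then compare response rates after the campaign. The file size, split, and response rates are simulated numbers, not results from the test described above.

```python
import random

random.seed(42)

# Hypothetical mail file: hold back a random slice before scoring and mailing.
file_ids = list(range(10_000))
random.shuffle(file_ids)
holdout = set(file_ids[:1_000])   # not mailed; any giving here is organic
mailed = set(file_ids[1_000:])    # selected and mailed

# Simulated post-campaign responses (mailed names respond more often).
responses = {i for i in file_ids
             if random.random() < (0.012 if i in mailed else 0.008)}

def response_rate(group):
    """Share of a group that responded during the campaign window."""
    return len(group & responses) / len(group)

print(f"mailed:  {response_rate(mailed):.2%}")
print(f"holdout: {response_rate(holdout):.2%}")
print(f"lift:    {response_rate(mailed) - response_rate(holdout):.2%}")
```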

That’s why “Donor Science” isn’t branding for us. It’s in our DNA and is a part of everything we do. Donor behavior is signal, and signal deserves rigor.

VeraData has been building and refining machine-learning data engines for fundraising for more than 20 years, since before it was trendy to call everything “AI.” Over that time, we’ve learned the same lesson repeatedly: models improve when you subject them to continuous validation.

The Bigger Lesson: First-Look Data Rarely Tells The Full Story

Fundraising teams are busy. Everyone wants a quick answer. Dashboards reward speed.

The problem is that “quick” metrics, average gift chief among them, are often the ones most easily fooled.

A donor file is a population. Populations have distributions. Distributions have outliers. If you ignore that, you can talk yourself out of a better strategy because one headline stat made you flinch.

If you want to pressure-test your current targeting without drama, we’ll help you set up a clean split test and a readout your CFO (and your future self) will trust.

Our job isn’t to make the first-look numbers feel good. Our job is to build predictable fundraising growth from the truth in the donor data.
