You’re Leaving Major Donor Money on the Table, and Your Data Is Why

By Michael Black, VeraData

Remember Robin Leach’s Lifestyles of the Rich and Famous? “Champagne wishes and caviar dreams,” then the camera pans to the clues: the house, the cars, the art on the wall. The whole premise was simple: you can spot wealth if you know what to look for.

Fundraisers don’t get a camera crew. You get a database, a giving history, a few interactions, and a hard deadline to lock your segments before the campaign goes out. And somewhere in that crunch, a donor with real capacity ends up in the same $25 renewal stream they’ve been in for five years, because nobody flagged them in time to change the plan.

That’s a data problem. And it’s more common than anyone wants to admit.

The “Who” Problem Nobody Talks About

Most fundraising teams are good at asking. The struggle isn’t the message or the offer; it’s knowing who belongs in which conversation.

When major donor and mid-level prospect lists are built on incomplete signals, predictable waste follows. Think about how common these targeting shortcuts are:

• Recent giving alone: “top donors this year” becomes the upgrade pool
• Loyalty alone: “they’ve given for 20 years, they must be ready”
• Instinct alone: “they came to the gala, I have a good feeling”

None of these signals are wrong. They’re just incomplete. And incomplete lists mean two expensive problems: the donor with real capacity who never gets the right ask, and the donor without capacity who gets months of high-touch outreach that goes nowhere.

According to Giving USA, major gifts (typically defined as gifts of $1,000 or more) account for a disproportionate share of nonprofit revenue, yet most organizations lack systematic ways to identify who in their file can actually make them. The Association of Fundraising Professionals has consistently found that prospect identification and qualification are among the top capacity challenges for development teams of all sizes.

The opportunity cost of misidentifying — or simply missing — high-capacity donors is real. It shows up in staff time spent chasing the wrong people, in revenue that never materializes, and in a file full of donors being asked to give less than they’re capable of.

Wealth Screening Tools Often Fall Short at Scale

Wealth screening tools exist for a reason. If you need to research a specific major gift prospect before a meeting, they’re useful. But fundraising programs don’t run on individual lookups. They run on lists.

Who gets the upgrade package? Who gets a personal call this month? Who gets excluded from the low-dollar renewal? Who moves from mid-level to major gift qualification? Who receives a different message because you’re testing a hypothesis? These decisions are made in batches, and individual lookup tools simply weren’t designed for that.

Raw data append vendors present a different problem. They can give you income indicators, behavioral signals, and demographic attributes, but raw data rarely answers the question a fundraiser actually needs answered: Who should we prioritize, and what should we stop doing? Turning a pile of attributes into a usable segmentation strategy takes analytical time most teams don’t have to spare.

Fundraisers don’t need more data; they need interpretation: a clear, consistent signal that can be applied across a file without requiring a researcher to touch every record.

The Donor Feels It Too

There’s a version of this conversation that stays safely in revenue projections. But the donor experiences it directly.

When someone has the capacity to do more and keeps getting treated like a small-dollar renewal, you’re not just missing revenue, you’re telling them what kind of relationship you’re offering. A donor with real financial resources needs to feel seen rather than flattered, and they need to know that you understand what matters to them. Respect their attention, and be specific about what their gift will do.

A generic renewal ask to a capable donor is a missed conversation. And it cuts the other way, too — pushing high-touch outreach onto someone who doesn’t have the financial room to respond creates awkwardness and disengagement. Neither outcome helps your program.

What Smarter Targeting Actually Looks Like

Better targeting means running your programs with intention: knowing which donors belong in which conversation, and building that segmentation in a way that scales.

Imagine appending a single affluence indicator across your entire donor and prospect file — one signal that tells you who belongs in a major gift conversation, who’s a mid-level candidate, and who should never see your low-dollar renewal again. No individual lookups. No patchwork research. Just a consistent, scalable way to segment your file with confidence.
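As an illustration only (the score field and tier thresholds below are hypothetical, not any vendor’s actual cutoffs), routing a file on a single bulk-appended 1–10 affluence score can be a few lines of logic instead of record-by-record research:

```python
# Hypothetical sketch: segmenting a donor file by an appended 1-10
# affluence score. The thresholds (8+, 5-7) are invented for illustration.

donors = [
    {"id": "D001", "last_gift": 25, "affluence": 9},
    {"id": "D002", "last_gift": 50, "affluence": 4},
    {"id": "D003", "last_gift": 25, "affluence": 6},
]

def route(donor):
    """Assign a campaign stream from the affluence score, not gift size alone."""
    if donor["affluence"] >= 8:
        return "major-gift-conversation"
    if donor["affluence"] >= 5:
        return "mid-level-candidate"
    return "standard-renewal"

segments = {d["id"]: route(d) for d in donors}
print(segments)
```

Note that D001, a $25 donor on paper, lands in the major gift stream: the point of a consistent appended signal is that giving history alone no longer decides the conversation.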

That’s the difference between “we have some wealth data” and “we can run a smarter fundraising system.”

If you had to defend your current major and mid-level targeting logic to a skeptical board member — not your best donor anecdote, but your system — could you? Most teams can defend the intention. Few can defend the method. That gap is why major giving feels harder than it needs to.

VeraData is launching Wealth Index — a new Fundraising Data product that assigns a simple 1–10 affluence score to donor and prospect records, appended in bulk across your file. It’s built by our team of data scientists who use Donor Science to help fundraising teams identify mid-level and major gift candidates at scale, build smarter segments without one-off lookups, and stop routing high-capacity donors into campaigns that don’t match their potential.

Talk Data

Ready to see what Wealth Index can do for your file?

The Average Gift Trap That Lets Outliers Run Your Data Story

By Joey Mechelle Farqué, Head of Content, VeraData

Remember that first stats class in school? Mode, median, mean. Then you’re taught about outliers: the weird numbers that can pull an average around like a magnet.

You pass the course, move on, and unless you’re a self-professed data nerd (hi) or work in data analytics, you forget most of it.

Fast-forward to nonprofit fundraising, where you live and die by dashboards. Those same concepts come back, but now they’re tied to revenue, budgets, and board expectations. And the most common mistake is the same one you made in that stats class: You trust the average before you understand the story behind it.

No worries. As the Donor Science people, we’ve got you.

The First-Read Problem

A national nonprofit organization that helps people connect with the beauty of the outdoors sought a smarter way to select recipients for mailings.

They relied on the tried-and-true selection process that had been in place for ages: recent donors with a minimum gift threshold.

The organization then asked VeraData to run a parallel approach.

We didn’t debate philosophy. We ran a fair test. Same offer, same timing, similar mail quantities. Different selection logic.

Our modeled approach didn’t just “perform well.” The results showed a higher response rate, more gifts, and higher overall revenue.

A Raised Eyebrow

Our model was better and more efficient. And yet, the client team’s initial reaction was skepticism.

Why? Because one number looked “bad” at first glance: average gift.

Our average gift was lower, so the immediate story became: “VeraData’s model found lower-value donors.”

That story was intuitive. It was also wrong.

The goal was to acquire $25+ names. When they saw the results, the first read was “average gift is lower.” But the real story was in the gift bands: VeraData drove more $25+ gifts overall, more $50+ gifts, and more $100+ gifts.

Sacrificing response for average gift is dangerous. Break out the granular detail and let the data guide the way.

We got the organization more of the names they wanted, even with a lower average gift. We also brought in a long tail of sub-$20 gifts that further subsidized the campaign (even if the organization never actively cultivated those donors).

When Outliers Hijack the Average

Average gift is a blunt instrument: total dollars divided by the number of gifts. It’s extremely sensitive to outliers.

Here’s the simplest version:

  • If a segment gets one surprise $3,000 gift, the average jumps.
  • That doesn’t mean the segment “has higher-value donors.”
  • It means the segment got a rare event.

In this campaign, the standard selection segment received a single large outlier gift in a high band that our segment didn’t happen to receive. That one gift inflated their average.

When you remove outliers (a standard way to check whether the average is telling the truth), the picture changes fast: the modeled approach’s advantage becomes clearer, not weaker.
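The arithmetic is easy to see with made-up numbers (the gift amounts below are illustrative, not the client’s actual results):

```python
# Two hypothetical segments with similar everyday gifts, one of which
# happens to catch a single $3,000 outlier.
standard = [25, 25, 50, 100, 3000]           # one rare large gift
modeled = [25, 25, 50, 50, 100, 100, 250]    # more gifts, no outlier

def mean(gifts):
    return sum(gifts) / len(gifts)

def trimmed(gifts, cap=1000):
    """Drop outsized gifts before averaging - a quick sanity check."""
    return [g for g in gifts if g < cap]

print(mean(standard))           # 640.0 - inflated by the $3,000 gift
print(mean(modeled))            # about 85.7 - looks "worse" at first glance
print(mean(trimmed(standard)))  # 50.0 - the outlier was the whole story
print(mean(trimmed(modeled)))   # about 85.7 - the advantage holds
```

One gift moved the standard segment’s average from $50 to $640. Nothing about the underlying donors changed; the segment simply got lucky once.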

This is the first myth to bust in nonprofit performance reporting:

Myth: A higher average gift means better targeting.

Reality: A higher average gift often means one donor behaved unusually.

The Truth Metric

If you want to know whether targeting worked, look at the shape of giving, not just a single summary number.

Ask:

  • Did we drive more gifts overall?
  • Did we lift response rate?
  • Did we increase revenue per piece (or per contact)?
  • Did we grow gift counts in meaningful mid-level bands (not just $10-$25)?
  • Does the lift hold when you remove $1,000+ gifts?

In this test, the modeled selection didn’t just “find small gifts.” It produced more gifts, including more gifts at solid everyday levels (think $50+, $100–$250). That’s not a cosmetic win. That’s the base of predictable fundraising.
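A readout that answers those questions is just band counts and per-piece math. A minimal sketch with invented numbers (the gift list and mail quantity are hypothetical):

```python
# Hypothetical post-campaign readout: the metrics that answer
# "did targeting work?" rather than average gift alone.

def readout(gifts, pieces_mailed):
    bands = {"$25+": 0, "$50+": 0, "$100+": 0}
    for g in gifts:
        if g >= 25:
            bands["$25+"] += 1
        if g >= 50:
            bands["$50+"] += 1
        if g >= 100:
            bands["$100+"] += 1
    return {
        "gifts": len(gifts),
        "response_rate": len(gifts) / pieces_mailed,
        "revenue_per_piece": sum(gifts) / pieces_mailed,
        "bands": bands,
        "ex_outlier_revenue": sum(g for g in gifts if g < 1000),
    }

modeled = readout([25, 25, 50, 50, 100, 250], pieces_mailed=1000)
print(modeled)
```

Run the same readout on both segments of a split test and the comparison is apples to apples, with the outlier question ("does the lift hold below $1,000?") answered in the same table.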

The VeraData Way

A traditional selection says: “Mail people who gave recently and above $X.”

A VeraData model asks a different question: “Who behaves like people who respond to this kind of appeal?”

Sure, that includes recency and gift history. But it also includes patterns most rule-based selects ignore:

  • Consistency vs. one-off giving
  • Momentum (are they trending up or down?)
  • Channel behavior (how they’ve responded in the past)
  • Timing patterns (when they tend to give)
  • Signals of affinity and likelihood to act now

Then we score a file so you’re not left guessing.

And here’s what most data vendors won’t say out loud: Modeling is only as good as the discipline around it. Holdouts. Validation. Post-campaign readouts. Learning what broke. Updating what drifted.
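A holdout, for instance, is nothing exotic: withhold a random slice of the selected file from the mailing, then compare its organic response to the mailed group. A sketch with made-up file sizes (the 10% rate is illustrative):

```python
import random

# Minimal holdout split: reserve a random slice of the selected file
# so true lift can be measured after the campaign.
random.seed(42)  # fixed seed so the split is reproducible

selected = [f"donor_{i}" for i in range(10_000)]
random.shuffle(selected)

holdout_size = int(len(selected) * 0.10)  # 10% withheld, illustrative
holdout = selected[:holdout_size]         # never mailed
mailed = selected[holdout_size:]          # receives the campaign

# Lift = mailed-group response rate minus holdout-group organic response.
print(len(holdout), len(mailed))
```

The holdout group’s giving tells you what would have happened anyway, which is the only honest baseline for claiming the model did the work.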

That’s why “Donor Science” isn’t branding for us. It’s in our DNA and is a part of everything we do. Donor behavior is signal, and signal deserves rigor.

VeraData has been building and refining machine-learning data engines for fundraising for more than 20 years — back when it wasn’t trendy to call everything “AI.” Over that time, we’ve learned the same lesson repeatedly: models improve when you subject them to continuous validation.

The Bigger Lesson: First-Look Data Rarely Tells The Full Story

Fundraising teams are busy. Everyone wants a quick answer. Dashboards reward speed.

The problem is that “quick” metrics are often the ones most easily fooled:

  • Averages
  • Blended ROAS
  • Single-number “quality” scores
  • Topline revenue without context

A donor file is a population. Populations have distributions. Distributions have outliers. If you ignore that, you can talk yourself out of a better strategy because one headline stat made you flinch.

If you want to pressure-test your current targeting without drama, we’ll help you set up a clean split test and a readout your CFO (and your future self) will trust.

Our job isn’t to make the first-look numbers feel good. Our job is to build predictable fundraising growth from the truth in the donor data.