by Judith LeRoy and David LeRoy
infop@cket 11, March 1995
Pledge is upon us...again. For many stations, no matter what time of year it might be, pledge is always just around the corner.
Pledge Is Necessary
A substantial amount of the financial largesse, and most of the new names for the stations' fundraising lists, come from the much-maligned pledge drive. A wealth of station resources - money, time and energy - is devoted to pledge drives. These excursions into the world of telethon must be as successful as possible: the acquisition budget for next year, the travel budget, the new copier - all may depend on the generosity of the community. Local dollars enable stations to exist on a somewhat better than subsistence level.
Pledge Is Good...?
Friend and foe of the pledge drive agree: Pledge will not disappear in the foreseeable future. Thus, the goal must be to find ways to make pledge drives more efficient for the stations and more palatable for the viewers. There are, indeed, many positive indicators. We have gathered a significant amount of data showing that people watch public television during pledge drives. What's more, those data indicate that people watch pledge breaks. A fair proportion of those viewers, in fact, watch many pledge breaks during the course of a fund drive. We know that members view significantly more public television than non-members do. We have data suggesting that watching "more" public television renders a viewer more vulnerable to pledge pleas. Mounting evidence indicates that public television, like public radio, has a "core" membership - folks so attuned to public television that they find an affinity of appeals in programs of different genres and types - nature, how-to's, documentaries and dramas - and they eclectically watch these programs and their accompanying pledge appeals with apparent relish and appreciation.
The importance of pledge as a reinforcing vehicle for core viewers and members is becoming evident. These folks tell you the best part of pledge is the "good programs." Pledge breaks are the most obvious example of local programming at many public television stations. As such, they should be orchestrated as carefully and skillfully as a national co-production or local documentary to ensure that they are effectively attracting new supporters, reinforcing old, loyal members, and representing quality television in the local market.
Well and good. Easily spoken. Most public television professionals have happily or grudgingly conceded that pledge is necessary for the financial well-being of public television. Even the staunchest critics don't think that pledge will disappear. Long-term financial studies show that the amount of revenue gained from pledge-acquired members surpasses that of revenue from members acquired by other means. Pledge is the chief source of new members to the station.
We stand to benefit by making pledge more effective. There are tools, studies, and projects being designed and executed that will enhance productivity of pledge drives. Lessons we learn from these efforts will, we hope, bear fruit in the near future. Public television must work hard to improve pledge appeals and mechanics to make pledge drives better.
But, the truth is, there is another variable at work that helps or foils pledge success. We have had, for many years now, a primary piece of evidence that could improve pledge. It is a truism that some stations have used, to good advantage, to maximize pledges in local markets.
Pledge and Audience: You Can't Have One Without the Other...
That truism is: Audience size affects pledge success. Note, here we are referring to the number of pledges per break. The dollar amounts raised per minute are influenced by any number of extraneous variables - premiums, price points, etc. But substantial data indicate that an increase in audience size at a station usually is accompanied by an increase in the number of pledges during a pledge drive.
In the above graph, full-day average ratings during pledge drives (metered markets) since 1988 have been correlated with the average number of pledges (all stations). Is it a surprise that the lines move almost exactly in tandem? When full-day GRPs rise, so do pledges. When GRPs decline, so do pledges. A perfect correlation is 1.0. The correlation on our graph is .83. In statistics, a correlation this high is a rare occurrence. What does it mean? In a short, simple sentence: the total number of pledges varies with the total size of the audience. In a pledge period when more people view more often, more pledges arrive over the stations' transoms. This graph uses systemwide data...general data - full-day GRPs and total pledges. What happens in individual markets with specific programs? We have heard from development directors, for lo these many years now, that there is no relationship between audience size and pledge.
"Programs with tiny audiences get phones ringing off the hook; the big audience programs don't get any calls."
Is that true? If it were, it would cast doubt on our ratings/pledge assertion.
We ran correlations between individual programs' ratings and the number of pledges the programs received in metered markets - markets where December and March data are available. Initial studies indicated significant correlations between pledges and ratings for individual programs.
We sent the study to folks at stations around the system. Several called to tell us that our method was faulty: we had included programs in our correlations that stations weren't seriously using to solicit pledging. In other words, we were counting pledges for programs that weren't necessarily designed for pledge and pledges for programs for which pledges were not being solicited, and that had the effect of erroneously lowering our correlations. Thanking our critics and counselors, we ran our correlations again, removing children's programs and series that didn't have pledge specials. We did our correlations only for pledge programs. And our correlations jumped significantly, as the following table shows:
| Station | Rating/Pledge Correlation |
| --- | --- |
| Washington DC | .94 |
| Boston | .71 |
| Philadelphia | .83 |
| Phoenix | .78 |
Below is an example of the correlation technique using WETA data. Pledge numbers and ratings are graphed. The hills and valleys on this WETA Ratings/Pledge Graph correspond almost exactly. Rather convincing? We think so. Pledge programs that get higher ratings apparently also get more pledges. Remember that the programs we designate as "pledge" programs are programs intended for pledge...programs designed to elicit an emotional response that would move people toward the phones.
We agree that there are anomalies to this rule of thumb. But even some of those aberrations can be explained. A classic example of a program with small ratings that makes lots of money is Bradshaw. But stop and think. While Bradshaw's ratings may be minuscule, the program generally goes on for hours and hours...so even though the program elicits only a 1 rating per half hour, if it continues with that rating for four hours, it has accumulated the equivalent of eight half-hour gross rating points - the same number of GRPs that a Nature special would accumulate in one hour with a 4 rating each half hour. (And we all agree that a 4-rated program is nothing to sneeze at in public television.)
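The Bradshaw arithmetic can be made explicit. A minimal sketch, assuming the simple definition used above: GRPs are the per-half-hour rating multiplied by the number of half-hours aired.

```python
# Gross rating points = rating per half-hour x number of half-hours aired.
def grp_total(rating_per_half_hour, hours_on_air):
    return rating_per_half_hour * (hours_on_air * 2)

# The article's comparison: a 1-rated Bradshaw running four hours
# accumulates the same GRPs as a 4-rated Nature special running one hour.
bradshaw = grp_total(1, 4)  # 1 rating x 8 half-hours = 8 GRPs
nature = grp_total(4, 1)    # 4 rating x 2 half-hours = 8 GRPs
print(bradshaw, nature)     # both equal 8
```

The point of the arithmetic is that total audience delivered, not the per-break rating, is the figure that should track pledges.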
So when we correlate pledges and ratings, we must remember that small audiences that build over many hours should result in more pledges because they produce high GRP figures - total audience. And these low-rated, long-playing programs have another pledge advantage: part of that audience - which was interested enough to watch for all those hours - would be more inclined to pledge than the average audience, because frequency (additive viewing) is related to loyalty, and loyalty to pledging. Further, for programs like Bradshaw, the audience is what we call a "transactional" audience: most of the pledgers wanted the premium, and that encouraged a decision to pledge. The evidence, for us, was accumulating. Our tests kept showing a relationship between audience size and number of pledges.
We contrived yet another test. We know markets differ in program preference - some programs play to large audiences in one market only to play to empty seats in a market down the road. It's true: even if the schedules are the same; even if the lead-in audience is of the same size; even if the station's average ratings are the same - the audience size may be very different in different markets. If you don't believe that, compare the Nova, Mystery or Masterpiece Theater ratings in Boston and Fresno. So looking at markets with differing appeal for the same program should shed more light on our theory. If our theory about the relationship between audience size and pledge is true, stations that get large ratings for a particular program should get more pledges than stations that get small audiences for that same program. Simple enough??
During August pledge we looked at eight markets for two of the more popular August shows: The Three Tenors and In Search of Angels. The data are presented in two graphs, one for each program.
In each graph, each marker represents a telecast. The vertical axis is ratings; the horizontal one is pledges per million homes (to control for market size). Relationships in these graphs are measured by how closely the markers cluster around a "line of best fit". A widely scattered set of markers shows a weak relationship; a tightly clustered set of markers shows a strong relationship.
In both Three Tenors and Angels' graphs, the markets cling close to the line of best fit. Thus, we see a strong relationship between audience size and total pledges.
To be specific, the correlation for The Three Tenors was .73. For Angels, it was .74. Stations that had larger audiences for specific programs also got more pledges for those programs. Once again: ratings and pledges are related.
Appeal Versus Affinity
The message from these studies seems relatively clear. Pledge numbers reflect audience size. This suggests, perhaps, the skill of the individuals choosing, producing and scheduling pledge programs is just as important as that of the development staff pledging those programs.
Undoubtedly, there are programs that generate high ratings that miss the mark as fund-raisers because they lack pledge appeal. They may be improperly paced. They may be sequenced and cut ineptly, so that they do not deliver a highly-charged, sympathetic audience to the break. They may lack a message, a call to action or simply, emotional resonance. These problems should be eliminated by sophisticated, pledge-focused producers and editors, so the local station's development staff can take full advantage of the potential pledge benefits accrued by a large viewing audience.
Obviously, if pledge numbers reflect audience size, local programmers have a role in the success of a pledge drive, ensuring that the local pledge schedule has popular appeal. To that end, arguments ensue whether a fund-raising schedule should aim for flow or turnover.
A pledge schedule with flow would ideally increase the time a viewer watches - thus effecting a rise in viewing frequency and GRPs. The flow proponents say that, at least, flow ensures an audience for the second program. What's more, that audience has hopefully been "softened" by repeated pledge requests... suffering a bit more guilt for not pledging because, after all, they have elected to stay with the station for a substantial period.
The turnover proponents retort that after one audience has been thoroughly pledged, it's wiser to move on to a totally new audience to harvest - assuming, of course, that a totally new audience can be gathered after the original one has been sent packing. . . . The fans of pledge turnover are usually looking for enhanced cumes - they seek to reach every possible viewer in the viewing population at least once to ensure that no one misses the station's fund-raising message. The question of flow vs. turnover is currently being studied. We have accumulated some data that suggest that pledge schedules can be improved by applying a new concept - affinity programming - to maximize audience and pledge numbers while retaining the benefits of both flow and turnover.
A quick definition of affinity can be gained by a simple exercise: On the left side of a sheet of paper, list 5 public television programs you like and watch whenever possible. On the right side of the paper, write the names of 5 public television programs you wouldn't watch unless tied to your chair. (And then you might close your eyes. Go ahead. . . tell the truth. Not even a rabid public television professional likes them all. Honest.)
Bingo! For you, the programs on the left side of the paper have affinity. The programs may be quite different in subject; the list probably contains totally different program types. But there are elements in those diverse programs that appeal to you whether you like science, drama, documentary or how-to's. Something within those programs draws them together, pulls you to them and gets your viewing. In public radio, affinity studies show that people who like All Things Considered like Car Talk. This isn't a connection that would have been assumed without research.
Simplistically, sometimes we think that scheduling two science programs causes flow. Or, we reason, if two programs that typically skew 65+ female follow one another, flow will automatically occur. Agreed, these tactics are more likely to produce flow than if very dissimilar demographic or interest programs are scheduled consecutively: e.g., NOVA following Championship Skating or, for that matter, Masterpiece Theater following Nature. But our affinity studies suggest that there are variables other than demographics or program type that determine ultimate audience flow.
It is absolutely true that Lawrence Welk and MacNeil Lehrer both have a high proportion of 65+ females in their audiences. But research shows that it is highly improbable that the white-haired elder tapping her foot to the tune of Mr. Bubbles is the same one solemnly listening to the soothing drone of the Dynamic Duo. It is also true that the "hard" science viewer who dotes on programs like The Creation of the Universe may not be totally enthusiastic about a NOVA program concerning the demise of the rainforests. In a recently completed study, we discovered that it was possible for more affinity to exist between some nature and how-to programs than between two how-to or two nature programs. We discovered that while "affinity" among program preferences may be a very personal quirk for an individual viewer, there are often universalities: our correlation studies show that there are preference profiles - combinations of individual program preferences that are shared by many viewers. Some affinities are rather obvious; others are a bit more obscure.
Would it surprise you to learn that 57 percent of the Great Moments/NOVA audience also watched the Benny Goodman pledge special, whereas only 24 percent of the MacNeil Lehrer audience did? That 33 percent of the Barney and Friends audience watched I'll Fly Away? That 50 percent of the Mormon Tabernacle Choir Special watched Straight Talk on Menopause? We would have suspected that the Three Tenors audience and the Wealthy Barber Marathon audiences would be very similar, but in fact, only 35 percent of the Wealthy Barber viewers watched The Three Tenors. . . .
Ideally, if we understood affinity, we could arrange pledge schedules that varied in type from program to program to avoid interest "burnout" ("Oh, yuck, another nature show!"), yet retained the viewing appeal that would keep some households in front of a public television program and station for a whole evening.
Once affinity is more clearly understood, programmers will become more skillful at putting together pledge schedules that infuse topical diversity of consecutive programs, thus attracting new fresh viewers for cume, while retaining the interest of a predictable portion of the last program's audience to increase frequency.
So What?
Back to the alarm bell. Pledge is upon us. And that's not necessarily a reason to complain.
Pledge does important things for the station and the system. We are getting "handles" on elements that affect the success of pledge drives. We are learning more about what appeals affect a pledge audience; how specific pitches, premiums and talent influence different demographic groups and different program audiences; how affinity programming can be used to enhance both audience and pledge results. And as we learn more, pledge drives can become more efficient and more palatable.
But as we manipulate mechanics, talent, premiums and appeals, we must remember the basic truth: Stronger programs get more pledges. Stronger programs, i.e., programs with large, broad audiences have more potential than programs with limited, narrow appeal. And the stronger programs - when produced as pledge vehicles with high pledge appeal, emotional resonance and the capability of delivering an interested, sympathetic and keyed-up audience to the break - will do best of all.
The ultimate success of a pledge drive seems to rest on the co-operative talents of a clever programmer, a cunning development director, some canny producers and program buyers and an appreciative audience. Better programs, cleverly scheduled, and well pledged. Something to think about. . . .
Dr. Judith LeRoy and Dr. David LeRoy are co-directors of PMN TRAC in Tucson, Arizona.
CPB funded this report. Opinions expressed are the authors' and do not necessarily reflect the opinions and policies of the Corporation.
