
Quinnipiac Assignment 10 – ICM 527 – Program Evaluation

Program Evaluation

This week’s readings were about evaluating a strategic plan and program.

Key Concepts

As Smith says on page 331, "Program evaluation is the systematic measurement of the outcomes of a project, program or campaign based on the extent to which stated objectives are achieved."

With a plan in place that includes clear, measurable objectives, the next question is whether anything is working. Answering it means figuring out how to measure results and what counts as 'good,' or at least adequate. In Module 8, we studied Cans Get You Cooking, where the idea was to increase awareness of cans' use in cooking via cooking shows and blogs. However, another objective was increased sales (after all, why bother with such a campaign if sales don't increase?), and in that respect the plan was unsuccessful. According to Companies and Markets, purchases of canned goods decline as the economy improves: when consumers have more discretionary income to spend on foodstuffs, they buy fewer canned goods, no matter how well-crafted a campaign is. There was increased awareness, yes, and by that criterion the campaign worked. By the criterion of increased sales, it did not. It seemed a little as if the goalposts were moved in that campaign: increased sales came to be seen as a less attainable goal, while awareness was far more readily attainable, and so awareness was presented as the premise behind the campaign.

These moved goalposts are the difference between what Smith calls awareness and action objectives (pages 332 – 335), with the third type of objective, acceptance, straddling the line between the other two. For the Cans Get You Cooking campaign, it seems the attainment of the awareness objective was the only cause for celebration.

Smith makes a compelling case on page 334 that creativity, effort, and cost don't count as measures of effectiveness. All of those facets of a campaign sit on the organization's side, whereas awareness, acceptance, and action are effects felt (and acted upon) by publics. By that definition, creativity and its cousins have nothing to do with whether a campaign was effective.

The Eight-Step AMEC Social Media Measurement Process

On page 4, Jeffrey outlines "The Eight-Step Social Media Measurement Process:

  1. Identify organizational and departmental goals.
  2. Research stakeholders for each and prioritize.
  3. Set specific objectives for each prioritized stakeholder group.
  4. Set social media Key Performance Indicators (KPIs) against each stakeholder objective.
  5. Choose tools and benchmark (using the AMEC Matrix).
    • Public Relations Activity
    • Intermediary Effects
    • Target Audience Effects
  6. Analyze the results and compare to costs.
  7. Present to management.
  8. Measure continuously and improve performance.”
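
To make the skeleton of that process concrete, here is a minimal sketch of steps 2 through 8 as Python data structures. Every name and number below is my own invented illustration; none of it comes from Jeffrey's paper.

    # A minimal sketch of Jeffrey's eight-step process as data structures.
    # All names and values are illustrative assumptions, not from the paper.
    from dataclasses import dataclass

    @dataclass
    class KPI:
        name: str
        target: float        # step 4: agreed to beforehand
        actual: float = 0.0  # step 8: updated as measurement continues

        def met(self) -> bool:
            return self.actual >= self.target

    @dataclass
    class Stakeholder:
        group: str       # step 2: research and prioritize stakeholders
        priority: int
        objective: str   # step 3: a specific objective per group
        kpis: list       # step 4: KPIs set against that objective

    donors = Stakeholder(
        group="Donors",
        priority=1,
        objective="Increase repeat donations in Q2",
        kpis=[KPI("repeat-donation rate (%)", target=15.0)],
    )

    # Steps 6-8: measure, compare against the target, report, repeat.
    donors.kpis[0].actual = 17.2
    for kpi in donors.kpis:
        print(f"{kpi.name}: target {kpi.target}, actual {kpi.actual}, met: {kpi.met()}")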
Avinash Kaushik, author of Web Analytics 2.0

This process compares favorably to the methodologies we learned in ICM 524 – Social Media Analytics, where we read Web Analytics 2.0 by Avinash Kaushik. On pages 29 – 32, Kaushik outlines his Step 3, Identifying Your Web Analytics Soul Mate (How to Run an Effective Tool Pilot), with an average time of two years. He says to evaluate the following:

  • Usability
  • Functionality
  • Technical
  • Response
  • Total cost of ownership

He also advises:

  • Get enough time
  • Be fair
  • Ask about data sampling
  • Segment like crazy
  • Ask about search analytics
  • Test site content grouping
  • Bring on the interns (or the VPs!)
  • Test support quality
  • Reconcile the numbers (they won’t add up, but it’s fun!)
  • Check the daily/normal stuff
  • Sweat the TCO (total cost of ownership)
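
One item on that list, reconciling the numbers, lends itself to a quick sketch. This is my own illustration, not code from the book; the tools and visit counts are invented:

    # Kaushik warns that numbers from two tools won't add up. This sketch
    # flags pilot days where two hypothetical tools disagree by more than 10%.
    pilot_data = {
        # day: (visits reported by tool A, visits reported by tool B)
        "day 1": (10412, 9876),
        "day 2": (11003, 10950),
        "day 3": (9540, 7812),
    }

    TOLERANCE = 0.10

    for day, (a, b) in pilot_data.items():
        gap = abs(a - b) / max(a, b)
        verdict = "investigate" if gap > TOLERANCE else "close enough"
        print(f"{day}: tool A={a}, tool B={b}, gap={gap:.1%} -> {verdict}")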

What Kaushik said and what Jeffrey said are similar: measurement is an objective activity, which is why objectives need to be clear and measurable. A five percent increase is measurable; 'better exposure' in general is not.

For both authors, the idea is to set specific objectives and then act on them, whether the task is launching a strategic campaign or selecting a web analytics vendor. Then, once the vendor is chosen, get the yardstick in place and use it. Kaushik further reminds us that, while our intention may be to select a vendor and essentially 'marry' it, we still need to evaluate the evaluator. If it isn't performing up to our reasonable specifications, then it's time for vendor divorce court.

Key Performance Indicators (KPIs)

On page 7, Jeffrey quotes Shel Holtz, principal of Holtz Communication + Technology (www.holtz.com), who defined a KPI as a "quantifiable measurement, agreed to beforehand, that reflect[s] the critical success factors of one's effort."

This puts KPIs on a par with what we have been calling objectives. Wanting to 'get better' is one thing, but it's vague and subject to weaseling. Wanting to improve recognition of the Institute for Life Sciences Collaboration (ILSC) and its missions by 5%, as measured by surveys taken during the second quarter of 2016, is a measurable key performance indicator. Anyone who can read numbers will be able to determine whether the KPI has been met.
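
As a sketch of just how mechanical that check can be, here is the 5% recognition KPI in a few lines of Python. The survey numbers are hypothetical, and I am assuming the 5% target is measured relative to a baseline:

    # Checking the recognition KPI mechanically. The survey figures are
    # invented, and the 5% target is assumed to be relative to baseline.
    baseline = 0.32   # share of respondents recognizing the ILSC, prior survey
    q2_result = 0.35  # same question, Q2 2016 survey

    improvement = (q2_result - baseline) / baseline
    print(f"Recognition improved by {improvement:.1%}")
    print("KPI met" if improvement >= 0.05 else "KPI not met")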

Applicability to the ILSC

Beyond recognition measurements, there are any number of KPIs that can be measured, including the number of schools served by the Small World Initiative by a certain date, or an increase in donations by a particular amount, subject to a clear deadline.

Currently, the ILSC website in particular seems to have been thrown together without any sense of how to handle technological and design changes, or scalability. Keeping measurements out of the mix means the site can be tossed up and then forgotten about, and it seems a lot like that's exactly what happened. But a website cannot be a flash in the pan, because that can leave publics feeling the organization behind it is fly-by-night. Particularly when asking for money, an organization needs to convey an impression of trustworthiness and solidity.

Adding key performance indicators and measurements means there needs to be a sea change in how the ILSC views the website. It isn't something to be thrown together in an afternoon, handled by a temp hired for a few weeks and then never seen again. Instead, it needs to be an integral part of the organization. While the organization's work is generally offline, there still needs to be room for the website in the minds of the board members. One facet of their thinking has to be how best to use the website and social media to communicate the ILSC's mission and goals, and to communicate with its publics. The website has to have a place in those conversations, and it currently does not. That has to change.


Quinnipiac Assignment #04 – Media Convergence

Once again, we did not have to prepare a YouTube video, so instead I am reprinting one of my essays in its entirety. This one is about media convergence. As media (print, television, the Internet, etc.) all become deliverable on one piece of hardware (generally a smartphone or an iPad), and content from one is advertised, copied, or shared on another, should our metrics and means of measuring reach on these platforms also converge? And what does that mean for the future of measurement?


Media Convergence

Wishing and Hoping AKA The Past

Back before television had even three channels, audience data was gathered via Nielsen ratings. Nielsen started in the 1920s but didn't really get into media analysis until its 1942 radio index, followed in 1950 by a television index (Nielsen, 90 Years, http://sites.nielsen.com/90years/). By 2000, Nielsen had moved into measuring Internet usage.

Throughout most of this nine-decade period, media was siloed: radio was analyzed one way, television another, and so on. But it was mainly counting. How many people watched a show? How many listened to a particular radio station? With 1987's People Meter (Nielsen, 25 Years of the People Meter, http://www.nielsen.com/us/en/newswire/2012/celebrating-25-years-of-the-nielsen-people-meter.html), an effort was made to gather more granular data, and to gather it more rapidly. However, Nielsen's efforts were still confined to extrapolated samples. Was the sampling correct? In 1992, the People Meter was used for the first time in an attempt to measure Hispanic viewing habits. But even in 2012, People Meters were in a mere 20,000 households. Were the samples representative? It's hard to say.
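
Sample size is only part of that question. Under textbook assumptions (a simple random sample, which a recruited panel is not), 20,000 households would give a surprisingly small sampling error; a quick back-of-the-envelope check, my own arithmetic:

    # Back-of-the-envelope sampling error for a 20,000-household panel,
    # assuming (simplistically) a simple random sample and a rating near 50%.
    import math

    n = 20000
    p = 0.5  # worst-case proportion
    margin = 1.96 * math.sqrt(p * (1 - p) / n)  # 95% confidence
    print(f"margin of error: +/-{margin:.2%}")  # roughly +/-0.7%

The formula says the error is tiny, but only if the panel behaves like a random sample. Representativeness, not size, is the harder problem.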

Here and Now AKA It’s Better, But ….

Qualitative social media measurements, including sentiment analysis, are an effort to understand viewer, user, and listener behaviors. Nielsen and the like measure quantifiable information, such as time spent on a channel (or a page). Qualitative measurement goes beyond that, in an effort to understand why people visit a website. Topsy, for example, counts positive and negative mentions of a site, product, service, or celebrity. Yet a lot of this is still quantitative data. Consider Martha Stewart as a topic of online conversation.

Martha Stewart on Topsy, June 9, 2014

All we can really see are numbers (the spike was on the day a tweet emerged claiming that Martha Stewart had a drone). This is still counting. There are no insights into why that tweet resonated more than others.
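
To underline the point, here is roughly what that kind of counting amounts to, as a toy sketch of my own (not Topsy's actual method):

    # A naive sentiment tally of the kind mention-counting tools produce:
    # still just counting. Keyword lists and tweets are invented examples.
    POSITIVE = {"love", "great", "awesome"}
    NEGATIVE = {"hate", "awful", "creepy"}

    tweets = [
        "I love Martha Stewart's new show",
        "Martha Stewart has a drone? That's awesome",
        "that drone thing is a little creepy",
    ]

    pos = sum(any(word in tweet.lower() for word in POSITIVE) for tweet in tweets)
    neg = sum(any(word in tweet.lower() for word in NEGATIVE) for tweet in tweets)
    print(f"positive: {pos}, negative: {neg}")  # counts, but no 'why'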

Media convergence is mashing everything together in ways audiences probably didn't think possible even a scant thirty years ago. Now we watch our television shows online, we are encouraged to tweet at our favorite radio stations, our YouTube videos become part of television advertising, and our Tumblr images are slipped into online newspapers. All of this and more can be seen on our iPads. Add a phone (or just use an iPhone or an Android phone instead of an iPad), and you've got nearly everything bundled together. How is this changing analytics? For one thing, what is it that we are measuring? When we see a music video on YouTube, are we measuring viewer sentiment about the sounds or the images? When we measure a television program's Facebook engagement, is it directly related to the programming, to the channel, to viewer sentiment about the actors or the writers, or to something else? And what does it mean to like or +1 anything anymore, when so many people seem to reflexively vote up their friends' shared content?

I believe that our analysis has got to converge as our media and our devices converge. After all, what is the online experience these days? On any given day, a person might use their iPad to look up a restaurant on Yelp, get directions on Google Maps, view the menu on the restaurant's own website, check in via FourSquare, take a picture of their plate and upload it to Instagram, and even share their dining experience via a Facebook photo album, a short Vine video, or a few quick tweets. If the restaurant gets some of that person's friends as new customers, where did they come from? The review on Yelp? The check-in via FourSquare? The Vine video? The Facebook album? The tweets? The Instagram image? Or some combination thereof?

Avinash Kaushik talks about multitouch campaign attribution analysis (Avinash Kaushik, Web Analytics 2.0, Pages 358 – 368), whereby customers might receive messages about a site, product, service, etc. from any number of different sources. On Page 358, he writes, “During the visits leading up to the conversion, the customer was likely exposed to many advertisements from your company, such as a banner ad or Affiliate promotion. Or the customer may have been contacted via marketing promotions, such as an email campaign. In the industry, each exposure is considered a touch by the company. If a customer is touched multiple times before converting, you get a multitouch conversion.” Kaushik reveals that measuring which message caused a conversion is an extraordinarily difficult thing to do.
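
To see why, consider even the simplest credit-splitting rules. This sketch, my own illustration with hypothetical touches rather than code from the book, contrasts last-touch and linear attribution; neither tells us which touch actually caused the conversion:

    # Two simple ways to split credit for one conversion across many touches.
    touches = ["banner ad", "email campaign", "affiliate promotion", "organic search"]
    conversion_value = 100.0

    # Last-touch attribution: all credit to the final touch before converting.
    last_touch = {t: 0.0 for t in touches}
    last_touch[touches[-1]] = conversion_value

    # Linear attribution: credit split evenly across every touch.
    linear = {t: conversion_value / len(touches) for t in touches}

    print("last-touch:", last_touch)
    print("linear:", linear)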

With media convergence, the number of touches in a campaign can begin to come together. Facebook likes can be measured across all channels. Tweets can be counted for whatever messages are sent on Twitter, whatever they are about. Will attribution be any easier? Hard to say, but if the number of channels continues to collapse toward one, will it matter quite so much in the future?