Replicating impression patterns in tweet schedules

If you do something once, you look for patterns and try to figure out if it’s possible to use those patterns to repeat the experiment. 

When that works and you get a repeated pattern, you’re onto something.

Below is a graphic of Twitter impressions for a series of tweets I scheduled yesterday and this morning. The schedule was based on an experiment I did with multiple tweets on one topic last week (which some of you read about; thanks to everyone who plugged this post last week and again earlier this week). This schedule added an early-evening tweet at 5 p.m. as a bridge between the noon and late-night tweets, plus a next-day tweet scheduled for 7:30 a.m. in an attempt to capitalize on the early-morning rebound I observed for the late-night tweet in last week’s test.

This data shows that the tweets performed almost exactly as I expected given last week’s results; the basic pattern is the same. The biggest first-hour impressions came from the noon and 10 p.m. tweets. The 5 p.m. tweet underperformed both of those overall and died fairly fast, but it still generated good impressions in its first two hours, and it rebounded slightly the next morning, which I didn’t expect.

The morning tweet went out at 7:30 a.m., so its "first hour" of data covers only 30 minutes and appears to underperform; it actually pulled 159 impressions in that half hour. Had I scheduled that tweet at 7 a.m., it seems likely it could’ve beaten the first hour of the 5 p.m. tweet.
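If I were scripting this comparison, the fair way to stack a 30-minute window against full-hour windows is to normalize everything to an hourly pace. A quick sketch (the only real number here is the 159-impression half hour; anything else would be plugged in from the spreadsheet):

```python
# Normalize an opening window of any length to a 60-minute pace so the
# 7:30 a.m. tweet's half-hour window can be compared against full hours.

def hourly_rate(impressions, window_minutes):
    """Impressions scaled to a 60-minute window."""
    return impressions * 60 / window_minutes

# 159 impressions between 7:30 and 8:00 a.m. is a 318-impression hourly pace
print(hourly_rate(159, 30))  # 318.0
```

That 318/hour pace is what makes me think a 7 a.m. start would have beaten the 5 p.m. tweet's first hour.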

What’s really notable about the morning tweet, though, is the huge 8 a.m. hour: it pulled 181 impressions in its second hour, which was far and away better than the first three tweets in the schedule.

I’m torn between 7 a.m. and 8 a.m. for the morning tweet in the next one of these experiments. I think an 8 a.m. tweet would get a bigger first hour than a 7 a.m. tweet would, but there’s clearly an audience in the 7 a.m. hour that the previous day’s tweets aren’t reaching. I’ll try 7 a.m. first, then move it to 8 a.m. and see what happens.

Overall, though, this definitely reinforces the need for a multi-tweet strategy for news you want to make sure people see. That first tweet at noon yesterday pulled 900 impressions and completely died after 15 hours. Adding the three reminders should more than triple the number of impressions for the message by the time today’s morning tweet dies out, and that signal boost is exactly what I was going for.

Fun stuff.

As a bonus, I’ve also been pulling engagement stats from Twitter’s analytics on each of these tweets (link clicks, favorites, replies, detail expands and so on), and once there’s more information on today’s morning tweet I’ll talk about those as well. Early indications are that the morning tweet is super important for overall engagement, but I’ll have more on that when more numbers are in.

Using analytics to guide a multi-post strategy on Twitter

Over the summer I began experimenting with using Sprout Social to schedule repeat posts of tweets. In the past, Twitter for Bemidji State was a fire-and-forget type of operation; we’d have a story, we’d tweet about it when we released the story, and that’d be the end of it. 

That always intuitively felt like a mistake; for it to be effective, several things would have to be true that simply can’t be. We were assuming the entire audience we hoped would see the tweet was:
• …on Twitter when we sent it
• …paying attention to the BSU tweets in their timeline when we sent it
• …willing to scroll back through tweets they missed if they popped onto Twitter a couple of hours later

Twitter’s new analytics data, which is now available to the masses, not only takes the guesswork out of this, it helps prove that none of those assumptions are true and reinforces the necessity of multiple tweets for key messages.

Using data from Twitter, I put together a Google Sheets analysis of three tweets I sent yesterday — all three identical, about BSU’s position in this year’s U.S. News & World Report college rankings. The first was sent at 10 a.m., the second at 3 p.m. and the third at 10 p.m.

The 10 a.m. tweet had 751 impressions, with 603 (80 percent) in the first four hours. The 3 p.m. tweet had 1,442 impressions, with 1,087 (75 percent) in the first four hours. The final tweet at 10 p.m. had 861 impressions, with 763 (88 percent) in the first four hours.
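The first-four-hour shares above fall straight out of the impression counts. A small sketch of the arithmetic (one wrinkle: the percentages in the post appear to be truncated rather than rounded, so `int()` matches them exactly):

```python
# Recompute each tweet's first-four-hour share of total impressions
# from the counts in the post: (first-four-hour impressions, total).
tweets = {
    "10 a.m.": (603, 751),
    "3 p.m.": (1087, 1442),
    "10 p.m.": (763, 861),
}

def share(first_four, total):
    """Percent of total impressions earned in the first four hours, truncated."""
    return int(100 * first_four / total)

for label, (first_four, total) in tweets.items():
    print(f"{label}: {share(first_four, total)}% in the first four hours")
```

All three tweets concentrated at least three-quarters of their lifetime impressions into the first four hours, which is the whole argument for repeating the message.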

Analyzing this data leads to some interesting observations:
• The 10 a.m. tweet was the least-viewed of the three, but it took 13 hours for it to drop below five impressions an hour.
• The 3 p.m. tweet pulled much more traffic: 520 impressions in its first hour, and more impressions in its first four hours than either of the other two got in total. It also didn’t die as quickly. It pulled 244 impressions in the three-hour block covering hours 4-7 after it was posted, while the first tweet had only 52 and the third only 19.
• The 10 p.m. tweet had huge initial traffic — 553 impressions in the first hour — and then tailed off quickly. However, unlike the first two tweets it picked back up again this morning, gaining 70 impressions between 6 and 9 a.m., or hours 9-11 after it was posted. The first tweet had 37 impressions in hours 9-11 and the second had 40.
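The block comparisons above are just sums over slices of an hourly breakdown. A sketch of how I’d compute them, with an entirely hypothetical hourly series standing in for the real spreadsheet (only the 520 first-hour figure comes from the post; the rest are made up to illustrate):

```python
# Sum impressions over a block of hours after posting, from an
# hour-by-hour breakdown of a tweet's impressions.

def block_total(hourly, start, end):
    """Total impressions for hours start..end after posting (1-indexed, inclusive)."""
    return sum(hourly[start - 1:end])

# Hypothetical hourly series for the 3 p.m. tweet; only 520 is real data.
hourly = [520, 130, 95, 80, 110, 75, 59, 40]
print(block_total(hourly, 5, 7))  # total for hours 5-7 after posting
```

The same function run over each tweet’s series gives the 244/52/19 comparison for the mid-life block and the 70/37/40 comparison for the morning rebound.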

I will have to do this more often with more tweets that are scheduled on a repeating basis to see if these patterns hold true. If they do, here are the adjustments I might make:

• Start the chain at 9 a.m. (when possible) to see if that leads to a faster start for the first tweet.
• Continue to schedule the second tweet five hours after the first to see if there’s a similar mid-afternoon bump in traffic.
• Move the third tweet up an hour, to 9 p.m., and see if that leads to either a bigger initial hour or more impressions in the first four hours.
• Add a 7 a.m. tweet the next morning to catch some of the rebound traffic that’s obviously coming in on the tail of the late-night tweet.
• Consider adding a mid-evening tweet between the 3 p.m. and 10 p.m. tweets, to take advantage of how well the 3 p.m. tweet did in hours 4-6 compared to the other two. There’s clearly still an audience there.

I’m suddenly completely enthralled by all of this. I will share more as I learn more.

#BartlettMetrics for August

Yesterday, I updated the #BartlettMetrics data — the follower statistics I compile at or near the beginning of each month for the seven state universities in the Minnesota State Colleges and Universities system.

I first started gathering this data back in August of 2011, and have been diligent about updating it monthly for about the last year and a half. I know all of the arguments against using follower volumes as a true measure of social media influence, but it’s an easy statistic to measure and there are trends in the numbers that are interesting to watch.

But it doesn’t tell the whole story. I’ve wanted to find some way to start measuring actual engagement numbers: likes, comments and shares on Facebook; favorites, retweets and @-mentions on Twitter. Part of the reason I haven’t started is simply that I haven’t put the effort into finding tools to do it. There are plenty of excellent things out in the world for measuring your own social media efforts (including Sprout Social and Crowdbooster, both of which I have in my toolbox for BSU), but the tools for measuring the efforts of others have always felt less robust.

Sprout Social will do some basic comparisons of competitors’ Twitter accounts, but it only has four data points — percentage of conversations between new/existing contacts, a percentage measure of “influence” that it doesn’t really explain, raw number of mentions (and no explanation for what constitutes a “mention”) and number of followers gained. You can export a daily comparison of mentions, which is getting closer but not without more information about what constitutes a mention. And it does nothing for Facebook comparisons.

Crowdbooster provides nothing like this at all.

So, on to other resources.

Simply Measured has some neat free tools, but they only measure in two-week increments backward from the day you run the reports. Assembling a month’s worth of data would mean scheduling specific times to generate these reports every two weeks so I could stitch together four weeks of data, and I’d never be able to run the comparisons roughly monthly, as I do with the follower stats. Paying for Simply Measured isn’t an option; its “cheap” tier is $500 a month. I wish this would work, because SM’s Facebook engagement comparison is pretty damn cool.

HootSuite has some reports that look like they may get to the neighborhood of what I’m looking for, but they’re teased in the free version and then paywalled behind either their pro or enterprise pay levels. Awesomely, there’s of course no pricing information for the enterprise level – and I’m not interested in more sales calls.

There are other things that I’ve run across that I won’t even mention, because they don’t do what I want them to do either. All of this screams “learn to code, learn the Facebook and Twitter APIs and just build something.” It’s probably getting to the point that I’ll feel like I need to do just that.
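If I ever do build something, even the zero-API version gets partway there: Twitter’s analytics dashboard lets you export tweet metrics, and tallying a file of those exports is trivial. A minimal sketch, with the caveat that the column names here ("impressions", "engagements") are my assumption about what such an export would contain, not a documented schema:

```python
# Tally total impressions and engagements from a CSV export of
# per-tweet metrics. Column names are assumed, not a documented schema.
import csv
import io

def totals(csv_text):
    """Sum the impressions and engagements columns across all rows."""
    reader = csv.DictReader(io.StringIO(csv_text))
    sums = {"impressions": 0, "engagements": 0}
    for row in reader:
        for key in sums:
            sums[key] += int(row[key])
    return sums

# Hypothetical three-tweet export.
sample = """impressions,engagements
751,34
1442,61
861,40
"""
print(totals(sample))  # {'impressions': 3054, 'engagements': 135}
```

Doing the same thing across competitors’ accounts is where the APIs would have to come in — and where the real work would be.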

“10 books that have stayed with you”

A couple of people have tagged me in the “10 books that have stayed with you” challenge that has been floating around on Facebook. The challenge is to list 10 books that resonated with you somehow; not necessarily the “best” or anything, just books that have stayed with you.

I replied to the first person who issued me this challenge that I wasn’t sure I could do it — while I’ve certainly read plenty of books, the influential things in my life have tended to be movies or television shows. I was a voracious reader as a kid, but am on something akin to a novel-every-two-plus-years pace these days (in that I start a novel, read a chapter every few weeks, and finish it two and a half years later). So I genuinely wasn’t sure I could do this. 

But tonight Melissa tagged me in her list too, and when your wife throws down the gauntlet you’ve gotta pick it up. So here’s my crack at this. In no particular order (with Amazon links):

• “Dune” by Frank Herbert
• “Armor” by John Steakley
• “Battlefield Earth: A Saga of the Year 3000” by L. Ron Hubbard
• “Red Storm Rising” by Tom Clancy
• “Imagica” by Clive Barker
• “Rework” by Jason Fried and David Heinemeier Hansson
• “It Will Be Exhilarating” by Dan Provost, Tom Gerhardt and Clay Shirky
• “Watchmen” by Alan Moore and Dave Gibbons
• “Hitchhiker’s Guide to the Galaxy” by Douglas Adams
• “Fahrenheit 451” by Ray Bradbury

What I’m Playing
I picked up Velocity 2X yesterday; it’s this month’s free PS4 game on PlayStation Network. I played for about an hour last night and got through maybe 11 of the game’s 50 levels.

It’s not a challenging game by any means; the intent is that you replay levels repeatedly, trying to pick up every item in a level in the fastest possible time (hence the title). It’s clever in that it’s a mashup of side-scrolling platformer and top-down shooter, often within a single level: you start as a top-down shooter, “dock” in a certain area of the level so you can find a switch that you need to open a gated area, and once inside the dock you change over to side-scroller so you can run to the switch.

The game’s challenge comes not from being able to finish a level (during my time playing, the levels were so easy that not completing one didn’t even feel like an option, though granted I am only about 20 percent of the way through the game), but from finishing the level perfectly: snagging all of the level’s collectibles and reaching a certain score threshold while finishing ahead of a time limit. However, once you’ve beaten a level there isn’t much motivation to replay it unless you want the achievements. The “XP” you get as currency for reaching certain performance thresholds in each level is used to un-gate future levels, so it may well be that by the time those gates take hold, you’ll have to farm previously completed levels to get enough XP to unlock new stages. Which, honestly, might just make me quit playing.

It’s fun. I’m glad it was free. I’m not sure I’ll ever beat all 50 levels, though. 

Fun stuff from social media today
• Bungie put out a rather awesome live-action trailer for Destiny; check it out here. Bring on Tuesday…
• Ikea put out a brilliant ad for its 2015 catalog, “bookbook”. Watch it on YouTube

Post-mortem: “InFamous: First Light”

Back in May, I wrote about how much I was enjoying the PS4 game “InFamous: Second Son”; at that time I had just completed a first playthrough on normal difficulty and was in the midst of a second playthrough at a higher difficulty. About a month later I completed that second playthrough, and in June it became the first game for which I’d earned a platinum trophy, awarded for completing all of the game’s other trophies.

I talked about how much I enjoyed the game back in May, and the fact that I eventually finished off every trophy available in the game was a testament to that. So when I heard that developer Sucker Punch was developing an expansion (which, really, was inevitable) called “First Light,” focused on the Abigail “Fetch” Walker character, I was excited. Fetch was a strong supporting character in the first game, and she was the source of the game’s neon-fueled powers, which were fun to play. All seemed to be in place for a solid follow-up.

I completed First Light last week, and I’ve been waiting for a bit to post about it simply because I initially wasn’t sure what to think about it. Here’s my post-mortem on the game:

What I liked
• The game seemed to use better character models than Second Son; Augustine, the first game’s main villain who was relegated to sidebar status in First Light, certainly looked better.
• Fetch’s neon abilities were significantly more powerful than Delsin’s version in Second Son. There were some nice upgrades to the sniper abilities, and Fetch was a much more capable melee fighter than Delsin was in any of the original game’s four power configurations.
• The clouds to boost Fetch’s speed when sprinting were super-fun; as if the neon sprints weren’t one of the best parts about the game already.
• The lumen race side-mission layer was fun. Most of the races were trivially easy, but it was a neat addition to the game that I enjoyed.
• Fetch’s graffiti-tagging side-missions were good, and I liked that they were very limited in number; there were too many of these in the first game.
• Only a couple of the lumen-collecting jumps were difficult, but the few that were took many attempts, and completing all of them felt like a real victory.
• The sniper’s nest missions were super-fun; there should’ve been more of those.

What I didn’t like
• The game was very short. Had I been so inclined I could’ve run through the game from start to finish in a solid night of play (as it was I sat down to play the game maybe four times in total, start to finish); it left me wanting much more.
• As with the first game there’s essentially zero penalty for failing at anything. You just roll back to a checkpoint and start again. And as with the first game, that eventually served to encourage me to play sloppily.
• There wasn’t a branching storyline depending on whether you made hero or villain choices, because there were no such choices. That created even less of an incentive to continue with the game after the first play-through.
• There were no boss fights. The boss fights in Second Son were some of the most challenging aspects of the game, and I missed those more-significant one-on-one fights in the expansion.
• The game world was limited to one of the first game’s two zones, and it was the less interesting of the two visually. Given the game’s neon focus, I would’ve liked to have spent time in the virtual Seattle’s Lantern District.
• I wish there would’ve been more police drones to hunt down; those weren’t difficult but they were fun to do, and even doubling the amount of them in the game would’ve been welcome.

Most disappointing to me was the fact that First Light moved away from the first game’s trophy system — where all of the game’s trophies could be achieved while simply playing through the main storyline and running the side-missions — and added “virtual training” missions that you essentially have to farm repeatedly in order to attain the scores necessary to earn the trophies. I really do not enjoy this kind of content, and I definitely do not like playing this content repeatedly simply to achieve a higher score (and, to be honest, the old-school ‘80s gamer in me weeps at this revelation about my current gaming habits). So as I did with Batman: Arkham Asylum and Batman: Arkham City, which had similar content making up a significant portion of the games’ respective trophies, I essentially skipped this content — and, as a result, am going to end up skipping about two-thirds of this game’s trophies. It would’ve been nice to have platinums on both the base game and the followup, but I’m simply not interested in grinding those training missions.

In short: while I wish the trophy system were more in line with the first game’s, and I wish the main campaign had taken longer to complete, in all I would say I liked First Light. Fetch is a strong character, and as I mentioned earlier the neon power tree was one of the best things about the first game. Knowing what I know now, though, I’m not sure I would spend $15 on it again.