Darrick: We are a part of a lot of different groups and organizations. What does a "Meaningful Community" mean or look like to you?
Venia: For me there is a progression in your community's relationship with your members - A "useful" community provides value to its members. A "successful" community accomplishes the goals and mission of that community by imparting "value" to its members. An "engaged" community encourages relationships to grow beyond the value a member initially desired and encourages them to give back. The #1 metric I use to determine how "meaningful" a community is, though, is self-disclosure. A member's willingness to disclose more about themselves shows a genuine affinity with other members. A meaningful community is exemplified in those moments when a person's relationship with others in the community transcends the community's purpose or the value-added engagement that keeps them interacting. A meaningful community creates a real feeling of affiliation that proves to a member that this is where they want to spend their time.
Darrick: How would you describe your role when it comes to the communities you work with?
Venia: As an online community manager and full-stack marketer, I usually work with brand communities and generally oversee a somewhat tenuous relationship between the community and its leadership. Many communities can be viewed like a beehive. Brands and more powerful stakeholders in a community can approach their community like a bear getting the honey, or a flower feeding it nectar. It's my job to tell a brand or executive team when they are being the bear, when they're being the flower, and when that's okay. Sometimes my job is to "speak for the bees" in their executive meetings, so the community has a voice. At other times it's to report cold facts on whether the community is providing real value to the people paying for it. I have to navigate my positions of authority and subservience to both the brand and the community. Because of this tenuous relationship, my job primarily involves navigating misconceptions about how the community really works, using analytics and social science, and that's why my co-founder and I created SociallyConstructed.Online.
Darrick: What gives you joy when working with communities?
Venia: It's been my raison d'être (pretentious but true) to enter any community and know that I've left it better for my presence; to know I've improved people's lives somehow. I know it may be weird to say, but analytics has a special place in that. I love the social-scientific aspect of being a community manager. I really like the notion of measuring a community's health, reporting it to those in charge, and seeing those people implement one thing that will change that community for the better.
Darrick: What are your biggest struggles when working with communities?
Venia: People like to think that because they spend every waking moment in their community, they know its pain points and how to improve it, but more often than not, once you get above a "tribe" of around 250 people, they're wrong. Many people use the lived experience of their own time in a community to make decisions rather than the learned experience of their community members. They see problems and advocate for fixes without determining the nature of the problem from other people's points of view. And this makes sense. As a community member you've become so fond of something, and you've gained enough clout to be considered a veteran or expert. It puts the blinders on, so to speak. This further underscores the importance of processes, though. As community leaders, we have a responsibility to gauge and report the metrics on our community's health - not just because everyone deserves to know, but because it's a social contract that keeps our decisions in check and makes it easier to figure out when we're the bear.
Darrick: Any last words or thoughts when it comes to meaningful communities?
Venia: Well, that sounded a bit ominous. I guess I would say community is a social construct. Online spaces, especially communities, are built out of the very communication they facilitate. That means putting together a community charter of transparent practices and measuring your community's success. You don't need to build infrastructure for the future, but you do need the infrastructure necessary to measure what's happening in your community today. You need to learn to listen. That's (not) all I wrote!
After this interview I also spent a good deal of time reflecting on what exactly Darrick meant by "meaningful" in our conversation, so I expanded a little on my response to his first question. I think a lot came out of it. Click below to read it, and join me in the community catalysts group if you want to reflect more on this :)
Here is Week 2: Launching the SCMS in Airtable!
This blog has been cross-posted from Ria's blog with her permission...

The social bonding period continues
I had a meeting with all four of my mentors on Friday, 15 May ’20. The details are here.
We largely discussed the past two weeks' progress and worked on understanding the SCMS (Social Currency Metrics System) in more detail. It was similar to a training session on the importance of qualitative data over quantitative data. The main agenda was “Why is qualitative data rejected in business, and how does reframing its collection using the SCMS make it useful?” It was a very informative presentation delivered by Samantha and Dylan.
For the next meeting, we’ll be discussing the concrete implementation of the project, both technically and theoretically. For this, I have some implementation ideas written in my Project Proposal, and I had a meeting with Valerio to discuss the pros and cons of different approaches. One approach is to use the already-present enriched index; the other is to create new ad-hoc indexes.
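To make that trade-off concrete, here is a minimal sketch of what each option might look like with the official `elasticsearch` Python client; the index names, mapping, and query are hypothetical placeholders, not the project's actual schema.

```python
# Minimal sketch of the two indexing approaches, using the official
# `elasticsearch` Python client. Index names and fields are hypothetical.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

# Option 1: query an already-present enriched index (name is a placeholder).
open_issues = es.search(
    index="github_issues_enriched",
    body={"query": {"match": {"state": "open"}}, "size": 10},
)

# Option 2: create a new ad-hoc index dedicated to SCMS-tagged comments.
es.indices.create(
    index="scms_tagged_comments",
    body={
        "mappings": {
            "properties": {
                "comment": {"type": "text"},
                "tag": {"type": "keyword"},
                "source": {"type": "keyword"},
            }
        }
    },
    ignore=400,  # don't fail if the index already exists
)
es.index(
    index="scms_tagged_comments",
    body={"comment": "Love the new docs!", "tag": "trust", "source": "github"},
)
```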
What I did this past week

The plan for next week
Next week's plan is to advance the ELK implementation and include GitHub issues and comments. Along with this, I'll try to implement a method to break customer reviews into two or more sentiments without introducing incoherence or breaking context. This will involve checking NLTK's implementation and understanding MAXQDA's approach to such situations.
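As a rough illustration of the starting point (not the approach the project settled on), here is a sketch that splits a review into sentences with NLTK and scores each one with the VADER analyzer; the example review is invented, and a naive per-sentence split is exactly where the context-break problem shows up.

```python
# Rough sketch: split a review into sentences and score each one with NLTK's
# VADER sentiment analyzer. The review text is an invented example.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("punkt", quiet=True)
nltk.download("vader_lexicon", quiet=True)

review = (
    "The onboarding docs were fantastic and easy to follow. "
    "The API rate limits, however, are really frustrating."
)

sia = SentimentIntensityAnalyzer()
for sentence in nltk.sent_tokenize(review):
    compound = sia.polarity_scores(sentence)["compound"]
    label = "positive" if compound > 0 else "negative" if compound < 0 else "neutral"
    print(f"{label:>8}  {compound:+.2f}  {sentence}")
```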
Don't miss out on more of Ria's Journey over the next several weeks! We have a series of these blogs throughout the process and you can start with the previous one here:
We’re hypocrites. We can admit it.
We just spent an inordinate amount of time across the previous 2 blogs touting the huge success of a 2-question survey, the Net Promoter Score (NPS). For 2 weeks we’ve covered how and why the Net Promoter Score was one of the single best systems to start measuring your community audiences... and here we are now, telling you the NPS survey is, although useful, fundamentally flawed. What gives?

Here’s the truth: Social scientists and businesses use surveys
Don’t get us wrong. Survey systems like the Net Promoter Score, Customer Satisfaction, and Sense of Community have been used for a long time in business, to great success.
But the reason most CEOs and Data Analysts just take a glance at the graphs, rip percentages out of context, and abandon them in their SurveyMonkey account until next year is that there are some fundamental problems with how businesses view the almighty survey. In this blog, we’re going to go over 3 pitfalls to survey production, delivery, and analysis that have caused the average Marketer and CEO to distrust their community’s responses.
But…
To prove we’re no negative Nancy and to stand by our promotion of the Net Promoter Score in the past two blogs, we’re also going to provide you grounded and simple solutions to avoid, fix, or altogether improve your survey implementations and convince your higher-ups to trust your respondents’ feedback. Here we go!

Problem 1: There's too much detailed data
To abstract data easily, port the data into a Word doc or spreadsheet and place comments on the feedback you find interesting. In that comment, use a simple 1 or 2-word phrase that encapsulates the theme of the statement. Note down what you mean by that term and then use that term each time you see similar feedback. Eventually, you'll see that theme pop out of the text frequently.
It's like highlighting the important ideas in a book. When a term comes up 40 times, it's probably more important than terms that turn up once or twice. Eventually, you'll have a count of themes and concepts that jump out at you as trends and patterns you can act on.
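If you would rather keep that tally in a script than in a spreadsheet, here is a minimal sketch of the same idea in Python; the comments and theme tags below are invented for illustration and are not part of the SCMS itself.

```python
# Toy example of counting hand-applied themes across survey comments.
# The feedback and tags are invented; the tagging itself is still done by a
# human reading each comment, exactly as described above.
from collections import Counter

tagged_feedback = [
    ("Onboarding took me weeks to figure out", ["onboarding"]),
    ("Docs are great but getting started was confusing", ["docs", "onboarding"]),
    ("Love the weekly community calls", ["events"]),
    ("I never know where to ask questions", ["onboarding", "support"]),
]

# Count how often each theme appears so trends jump out of the text.
theme_counts = Counter(tag for _, tags in tagged_feedback for tag in tags)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```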
Problem 2: Only a few get to (or even want to) speak
“The only people who will take your survey are people who take surveys."
This next issue has less to do with setting up the survey to succeed and more to do with the audience receiving the survey.
One of the most common issues with surveys is that they require a person to take time out of their day to do something they weren’t planning to do, and put in effort they weren’t initially anticipating.
3 different kinds of “fallacies” rear their ugly heads here and build on each other to create a nasty issue with your resulting qualitative data set.
And if you can’t sidestep these fallacies, your survey is bunk. These fallacies are the main reasons data nerds cite when they poo-poo the idea of collecting opinions via survey.
I’ll explain each before we get into ways to look out for them.
Fallacy 1: Vocal Minorities or Polarized Involvement
In general, only about 2% of any community will be labeled “power-users.” These are the people who are ALWAYS talking and always giving opinions. Usually, they’re also the ones you interact with and trust the most.
On the flip side, detractors tend to be hyper-vocal about their opinions. As the saying goes, “Negative PR is about 10 times stronger than good PR.”
And then there’s the middle.
Fence-sitters tend to be less vocal and less invested, so getting their opinion is difficult. That means you’ll get biased answers from your polarized users and far fewer from the people in the middle.
Fallacy 2: Survey Fatigue, or more broadly, the Diminishing Value of Work
You’ve likely run into the term Survey Fatigue before, but you probably haven’t spent much time digging into the theory behind it.
The diminishing value of work refers to how much a respondent initially feels the survey is worth, and how that perceived value changes as they move through it. Every action requires a certain amount of commitment, and that calculation happens every time a member participates in your community.

Each survey question is an additional unit of work. As effort goes into the survey, the survey can start to feel “less worth it.” Eventually, the effort they’ve put in is no longer justified by the value they expect to get, and they click off.

Many people frame this as a question of survey length and how long the questions are, but in reality, short or long doesn’t matter. It’s about imparting enough value before, during, and after they fill out the survey that they still feel their effort is worthwhile by the last question.
Fallacy 3: The Spiral of Silence
This last fallacy is less known, but you can think of it as the ultimate consequence of letting fallacies 1 and 2 get too far out of hand.
The vocal bias skews our data to favor the involved. Fence-sitters won’t see as much value but are still important. If you make decisions based only on the more vocal, then over time fence-sitters lose any sense of influence they did have and begin to think their opinion, had they provided it, wouldn’t have made a difference.
So they start to think their opinion isn’t valued or that you won’t listen to them, and they intentionally withhold it. As a result, their ideas aren’t heard, and their voices DO become of less value.
If this sounds a lot like a certain country’s political situation - you’re right. It’s the exact same mechanism, and it happens at every level of a community; small group to policy.
Now let’s talk about solutions.
To get around these fallacies there are a lot of tactics and fail-safes you can implement. A lot of organizations attach extrinsic rewards with a more stable “value,” like raffles and badges, to their surveys so the effort feels more fairly compensated.

It should be no surprise that, as community managers at SC.O, we don’t recommend that approach. Extrinsic rewards are a great way to devalue the intrinsic value of influence through participation. The work feels less worthwhile when the reward is detached from your brand.

On top of this, respondents who simply want the reward at the end of the survey may not give answers that match the quality of those driven by intrinsic motivation.
Instead, we recommend making the work smaller and spreading it out over time by adding 1-2 question surveys like the NPS to your regular community management or social media campaigns. Then have real, public conversations that credit those thinkers, and use the results to take a transparent action.
These questions encourage “passive engagement” rather than requiring active commitment, so the effort is lower and the conversation is viewed as valuable. It will also pull some of your “lurkers” and “fence-sitters” out of their holes if you spin the conversation toward them. Consider priming an audience before the survey with a #LoveOurLurkers campaign!
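As a quick refresher on the arithmetic behind those 1-2 question check-ins, here is a minimal sketch of turning raw 0-10 NPS responses into a score; the responses are made-up sample data.

```python
# Minimal sketch of computing a Net Promoter Score from raw 0-10 responses.
# NPS = % promoters (scores 9-10) minus % detractors (scores 0-6).
def net_promoter_score(responses):
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / len(responses)

sample = [10, 9, 9, 8, 7, 7, 6, 5, 10, 3]  # invented sample data
print(f"NPS: {net_promoter_score(sample):+.0f}")  # +10 for this sample
```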
You should also make this easier on yourself!
Collect, tag and measure your community's passive comments across all your social channels in one place by implementing our Social Currency Metrics System for free!
Problem 3: Most surveys aren't "science enough” to join the science club
Let’s get elementary now…
“If your survey uses the scientific method over the social-scientific process, you’re not collecting your data correctly, at all."

The scientific method is predicated on the systematic manipulation of “variables".
What is the cause-effect relationship between your studied thing and your hypothesis? The idea is to control as many variables as possible and test the relationships between 1-3 unknown variables. This allows you to solidify correlations into findings and then theories.
This works great in lab environments, on problems with clearly defined answers, when different approaches have clear upsides and downsides, or with scientific principles that are the same no matter where you go.
But that’s simply not reality when you start adding people, culture, and social structures throughout the big wide world to the mix.
People are too diverse and do things for too many different reasons. Often their actions can only be defined correlationally.
And that is why the great social-scientists of the 1900s built on the scientific method with the lesser known but ridiculously impactful “social-scientific process”.
The primary reason people believe that data collected from surveys is highly subjective is that the data stopped at step 3 in this larger process. You collected it, looked at it, found some cool stuff, and said, “huh, looks like this is a thing”. Stopping there doesn’t give you the ability to solidify any correlations you make into clear causal effects.
The social scientific process creates objective data out of subjective data by taking the cause-effect relationship of the scientific method further; it tests the environmental factors at the same time as the variables using the rule of generalization.
For example, using the traditional scientific method, your survey goes around it once:
- You observed trends in your community
- You wrote a survey about it
- You wrote your questions specifically to suss out those variables
- You got your results, analyzed them, and made a report
- You disseminated them to the powers that be
This same process makes a really solid go at steps 1-3 of the social method, but it stops at the rule of generalization.
It doesn’t investigate the limitations of those hypotheses, it doesn’t root out fallacious conclusions, it doesn’t generalize to wider audiences, and it doesn’t test the limits of what you’ve learned so you know where the correlation ends.
If you do the survey using the social-scientific process it goes around the scientific method a full 3 times before you get “results”.
So, this social-scientific process is the reason we love the Net Promoter Score. If you implement the NPS like we taught you in our prior blogs, it covers a full go-around of the social-scientific process as it happens over and over again.
To Conclude
The Net Promoter Score is great because it turns active data collection into passive collection, it is continually available, it only prompts for comments when people really want to provide them, and it's based on emotions rather than pre-assessed logic.
So to conclude, we don’t want to discourage you from implementing these awesome community management tools. We are not against surveys.
What we are saying is that these community analytics tools need to be implemented with these issues in mind. Each of these problems is a reason people have grown to mistrust qualitative data over the past several decades.
We aim to fix that by making qualitative data easier to collect and analyze, more objective, and harder to misread, by taking your use of qualitative data further with our Social Currency Metrics System.
Check out the system and how to build your own for free here, or read the previous two parts of this blog!