SociallyConstructed.Online
  • Blog
  • SCMS
  • services
  • Contact Us

[Interview] Community Catalysts' Darrick Hildman interviewed Venia about what it means to have a meaningful community

8/12/2020

0 Comments

 
A few weeks ago, Venia published a follow-up blog to an interview with Darrick Hildman, who runs the Community Catalysts community - a community for community managers to learn and collaborate so we can solve common problems.

This blog is the official interview behind that post.

You can read the post here, and Samantha's additional blog about what it means to make a "meaningful" community here.

The Interview

Darrick: ​Welcome to our 1st Catalyst Spotlight!
Samantha Venia Logan (Venia) will be answering some questions about community. Thank you Venia for participating.

Venia: Absolutely.
​

Darrick: We are a part of a lot of different groups and organizations. What does a "Meaningful Community" mean or look like to you?

Venia: For me there is a progression in your community's relationship with your members - A "useful" community provides value to its members. A "successful" community accomplishes the goals and mission of that community by imparting "value" to its members. An "engaged" community encourages relationships to grow beyond the value a member initially desired and encourages them to give back.

 The #1 metric I use to determine how "meaningful" a community is, though, is self-disclosure.
 
A member's willingness to disclose more about themselves shows a genuine affinity with other members. A meaningful community is exemplified in those moments when a person's relationship with others in the community transcends the community's purpose or the value-added engagement that keeps them interacting. A meaningful community creates a real feeling of affiliation that proves to a member that this is where they want to spend their time.
Communities are like a beehive. Are you the bear after the honey or the flower providing the nectar?
Darrick: How would you describe your role when it comes to the communities you work with?

​Venia: As an online community manager and full-stack marketer, I usually work with brand communities and generally oversee a somewhat tenuous relationship between the community and its leadership.

Many communities are viewed similarly to a beehive.

Brands and more powerful stakeholders in a community can approach their community like a bear getting the honey, or a flower feeding it nectar. It's my job to tell a brand or executive team when they are being the bear, when they're being the flower, and when that's okay.

Sometimes my job is to "speak for the bees" in their executive meetings, so the community has a voice. At other times it's to report cold facts on whether the community is providing real value to the people paying for it. I have to navigate my positions of authority and subservience to both the brand and community.
​

Because of this tenuous relationship, my job primarily involves navigating misconceptions about how the community really works, using analytics and social science, and that's why my co-founder and I created SociallyConstructed.Online.



​Darrick: What gives you joy when working with communities?

Venia: It's been my raison d'être (pretentious but true) to enter any community and know that I've left it better for my presence; to know I've improved people's lives somehow.

I know it may be weird to think, but analytics has a special place in that.

I love the social-scientific aspect involved in being a community manager. I really like the notion of measuring a community's health, reporting it to those in charge, and seeing those people implement one thing that will change that community for the better.


Sometimes my job is to "speak for the bees" in executive meetings, so the community has a voice. Other times it's to report cold facts on whether the community is providing real value to the people paying for it.
Darrick: What are your biggest struggles when working with communities?

Venia: People like to think that because they spend every waking moment in their community, they know its pain points and how to improve it. But more often than not, once a community grows beyond a "tribe" of around 250 people, they're wrong.

I would say many people use the lived experience of their own time in a community to make decisions, rather than the learned experience of their community members. They see problems and advocate for fixes without determining the nature of the problem from other people's points of view.

And this makes sense. As a community member, you've become fond of something and gained enough clout to be considered a veteran or expert. That puts the blinders on, so to speak.


This further underscores the importance of processes, though.

​As community leaders, we have a responsibility to gauge and report out the metrics on our community's health - not just because everyone deserves to know, but because it's a social contract that keeps our decisions in check and makes it easier to figure out when we're the bear.



Darrick: Any last words or thoughts when it comes to meaningful communities?

Venia:  Well that sounded a bit ominous. 

I guess I would say community is a social construct.

Online spaces, especially communities, are built out of the very communication they facilitate. That means putting together a community charter of transparent practices and measuring your community success. You don't need to develop infrastructure for the future, but you need to have the infrastructure necessary to measure what's happening in your community today. You need to learn to listen.

That's (not) all I wrote! 

After this interview, I spent a good period of time reflecting on what exactly Darrick meant by "meaningful" in his conversation, so I expanded on my response to the first question. I think a lot came out of it. Click below to read it, and join me in the Community Catalysts group if you want to reflect more on this :)
What is a "meaningful" community?
0 Comments

Ria's 2nd Week: The SCMS in Airtable

6/7/2020

0 Comments

 
At the beginning of the month we were accepted and paired with a talented Software Developer, Ria Gupta, to implement the Social Currency Metrics System in GrimoireLab for Google Summer of Code. 

Last week was Ria's first week, and in this blog series we are cross-posting Ria's personal blog detailing every step of the process!
Series Blog Contents:
Check here for all published blogs.
Announcement: Ria's Journey begins!
Week 1: Ria's 1st week
Week 2: The SCMS in Airtable
Week 3: Preparations & Superheroes
Week 4: Putting Code to Pixel
Week 5: The SCMS Data's Alive!
Week 6: Airtable to Google Sheets
Week 7: Our 1st Visualizations
Here is Week 2: Launching the SCMS in Airtable!

This blog has been cross-posted from Ria's blog with her permission...

The social bonding period continues

I had a meeting with all of my four mentors on Friday, 15 May’20. The details are here.

We largely discussed the past two weeks' progress and built a deeper understanding of the SCMS (Social Currency Metrics System). The meeting doubled as a training session on the importance of qualitative data over quantitative data. The main agenda was: why is qualitative data rejected in business, and how does reframing its collection using the SCMS make it useful? It was a very informative presentation delivered by Samantha and Dylan.

This was the first of six training sessions in the series. Analysing trends in qualitative data helps build context, unlike quantitative data, which isolates trends. To understand this better and get first-hand experience, I'll be implementing a personal SCMS system this week.
Ria's Airtable base created with Amazon data
Implementing SCMS on Twitter data of Amazon

For the next meeting, we’ll be discussing the concrete implementation of the project both technically and theoretically. For this, I have some implementation ideas written in my Project Proposal, and I had a meeting with Valerio to discuss the pros and cons of different approaches. One approach is to use the already present enriched index, and the other is to create new ad-hoc indexes.
​
Data exported from the Airtable
Output data extracted with specific parameters from the CHAOSS mailbox

What I did this past week

  • I implemented a working SCMS on Airtable using collected tweets about Amazon. Just for the initial setup, I've used a small database of around 10-15 records. You can find it here. This involved defining a communication trace (I selected Twitter, which can be extended to include more platforms) and defining a meaningful codex: tagging data on the basis of Utility, Trust, Transparency, Consistency, and Merit.

  • I completed a pilot study towards building an implementation sketch. The repository can be seen here. For this, I created a new enricher for mbox (ScmsMboxEnricher) and changed the attributes of the data, e.g. SubjectAnalysed -> Scms_Subject_Analysed and Body_Extract -> Scms_Body_Extract. I created a new pipermail enricher inheriting from the SCMS mbox enricher, and removed all data except the handful of attributes mentioned, i.e. uuid, project, project_1, grimoire-creationdate, origin, Scms_Subject_Analysed, and Scms_Body_Extract. I executed micro-mordred to collect and enrich data from mbox, and dumped the enriched data to an ElasticSearch index. Finally, I made a script, ES2Excel, which places each attribute of the received data in its own spreadsheet column and outputs a CSV file.
    ​
  • I came to understand the interaction between Perceval, ELK, and Kidash via terminal commands. I explored p2o.py, which can be used to enrich the extracted data; it returns two data sets, a raw index and an enriched index. I used Kidash to make a dashboard of the data present at localhost:9200. (p2o was used before micro-mordred and is decommissioned.) I also gained a basic understanding of raw versus enriched index data.
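The final ES2Excel step above, renaming the SCMS attributes and placing each kept attribute in its own CSV column, can be sketched in plain Python. This is a hypothetical illustration, not GrimoireLab code: only the field names come from the post, while the sample record and helper names are made up.

```python
import csv
import io

# Field renames described in the post; everything else here is illustrative.
RENAMES = {
    "SubjectAnalysed": "Scms_Subject_Analysed",
    "Body_Extract": "Scms_Body_Extract",
}
# The handful of attributes kept on each enriched item.
KEEP = ["uuid", "project", "project_1", "grimoire-creationdate", "origin",
        "Scms_Subject_Analysed", "Scms_Body_Extract"]

def scms_fields(item):
    """Rename the SCMS attributes and drop everything not in KEEP."""
    renamed = {RENAMES.get(key, key): value for key, value in item.items()}
    return {key: renamed.get(key, "") for key in KEEP}

def items_to_csv(items):
    """Place each kept attribute in its own column, one row per item."""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=KEEP)
    writer.writeheader()
    for item in items:
        writer.writerow(scms_fields(item))
    return buffer.getvalue()

# A made-up enriched item, shaped like the attributes listed in the post.
sample = {
    "uuid": "ab12", "project": "chaoss", "project_1": "chaoss",
    "grimoire-creationdate": "2020-06-01", "origin": "mbox",
    "SubjectAnalysed": "Re: metrics", "Body_Extract": "Thanks for the patch!",
}
print(items_to_csv([sample]))
```

In the real pipeline the items would come from the ElasticSearch index that micro-mordred populated, rather than a hard-coded dict.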

The plan for next week

Next week's plan is to advance the implementation in ELK and include GitHub issues and comments. Along with this, I'll try to implement a method to break customer reviews into two or more sentiments without introducing incoherence or breaking context. This will involve checking NLTK's implementation and understanding MAXQDA's approach to such situations.

Don't miss out on more of Ria's Journey over the next several weeks! We have a series of these blogs throughout the process and you can start with the previous one here: 

Announcing Ria Gupta!
Ria's 1st week
0 Comments

3 Reasons Surveys Aren’t Trusted & How to Fix Them — With Science!

5/7/2020

0 Comments

 
Part 3 of a 3 part series on community surveys.
Part 1: Why you should use the NPS.
Part 2: How to install the NPS system
We’re hypocrites.  We can admit it.

We just spent an inordinate amount of time across the previous 2 blogs touting the huge success of a 2-question survey, the Net Promoter Score (NPS). 

For 2 weeks we’ve covered how and why the Net Promoter Score was one of the single best systems to start measuring your community audiences... and here we are now, telling you the NPS survey is, although useful, fundamentally flawed.

What gives? 
​

Here’s the truth:
Social scientists and businesses use surveys
very differently, and businesses usually do it wrong.
Don’t get us wrong. Survey systems like the Net Promoter Score, Customer Satisfaction, and Sense of Community have been used for a long time in business, to great success. 

But the reason most CEOs and data analysts just glance at the graphs, rip percentages out of context, and abandon them in their SurveyMonkey account until next year is that there are some fundamental problems with how businesses view the almighty survey.

In this blog, we’re going to go over 3 pitfalls to survey production, delivery, and analysis that have caused the average Marketer and CEO to distrust their community’s responses.
But…
​To prove we’re no negative Nancy and to stand by our promotion of the Net Promoter Score in the past two blogs, we’re also going to provide you grounded and simple solutions to avoid, fix, or altogether improve your survey implementations and convince your higher-ups to trust your respondents’ feedback.

Here we go!
The problem with capturing customer sentiment: 3 reasons people distrust qualitative data

Problem 1: There's too much detailed data to sort through & not enough time

Uncharted Waters: RESCQU.NET community census report
When I was working at RESCQU.NET, our typical "community census" involved weeks of meetings and a lot of back and forth between myself and my volunteers. 

We would spend tons of time figuring out what we needed to ask of our members, how best to ask it, and what we were looking for in an answer that would really matter.  

We then spent several weeks working tirelessly to market the thing. We'd put out preliminary feelers, publish the survey, and bam, just like that, we'd get 1,500 strongly worded responses we had no real idea what to do with.

One of the ways we made processing easier on us was to limit the amount of "qualitative data" we'd collect because each question equaled an opinion, and that meant 1500 people times 10 comment boxes.  

It was simply too much, so we structured the survey to make it easier for us to handle. 

I've found that the same general approach happens in any company at any scale. Qualitative data is a fire hose no one wants to turn on, and if you do, no one wants to go through it. Even if it holds information that could save you from a dumpster-fire PR disaster or generally improve your service, it's still not likely that you'll go through it.

The issue here is that most people pick questions intended to reduce the workload later on and, in doing so, limit the responses you receive. Alternatively, many go the other way and produce a survey with so much data that making sense of it becomes insurmountable.  

As a result you're incentivized to ignore the qualitative data, which is the bulk of the survey's value. 

To solve this problem instead of working to avoid it, learn how to best process the information you're receiving. We recommend learning how to "abstract" or tag your data for quick and easy tallying later. 

We'll have a full blog on how to tag your data and "abstract" it into easy-to-digest themes in a few weeks so be sure to check back here for that, but here's the gist:

To abstract data easily, port the data into a Word doc or spreadsheet and place comments on the feedback you find interesting. In that comment, use a simple 1 or 2-word phrase that encapsulates the theme of the statement. Note down what you mean by that term and then use that term each time you see similar feedback. Eventually, you'll see that theme pop out of the text frequently. ​
It's like highlighting the important ideas in a book. When a term comes up 40 times, it's probably more important than terms that turn up once or twice. Eventually, you'll have a count of themes and concepts that jump out at you as trends and patterns you can act on.
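Once the tags are in place, the tallying is the easy part. As a minimal sketch (the comments and theme tags below are invented examples, not real survey data):

```python
from collections import Counter

# Feedback you've already "abstracted": each response carries the short
# 1-2 word theme tags you noted while reading it. All examples are made up.
tagged_feedback = [
    {"comment": "I never know when events happen", "tags": ["communication"]},
    {"comment": "The onboarding docs are confusing", "tags": ["onboarding"]},
    {"comment": "Nobody announced the schedule change", "tags": ["communication"]},
    {"comment": "Loved the mentor program!", "tags": ["mentorship"]},
]

# Tally how often each theme appears across all responses.
theme_counts = Counter(tag for item in tagged_feedback for tag in item["tags"])

# Most frequent themes first: the ones that "come up 40 times" float to the top.
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

Whether you tag in a Word doc, a spreadsheet, or code, the principle is the same: consistent short labels now make the trends countable later.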

Problem 2: Only a few get to (or even want to) speak

“The only people who will take your survey
are people who take surveys.”

This next issue has less to do with setting up the survey to succeed and more to do with the audience receiving the survey.  

One of the most common issues with surveys is that they require a person to take time out of their day to do something they weren’t planning to do, and put in effort they weren’t initially anticipating.  

3 different kinds of “fallacies” rear their ugly heads here and build on each other to create a nasty issue with your resulting qualitative data set. 

And if you can’t sidestep these fallacies, your survey is bunk. These fallacies are the main reasons data nerds cite when they poo-poo the idea of collecting opinions via survey.

I’ll explain each before we get into ways to look out for them.

Fallacy 1: Vocal Minorities or Polarized Involvement
In general, only about 2% of any community will be labeled “power-users.” These are the people who are ALWAYS talking and always giving opinions. Usually, they’re also the ones you interact with and trust the most.  

On the flip side, detractors tend to be hyper-vocal about their opinions. As the saying goes, “Negative PR is about 10 times stronger than good PR.”

And then there’s the middle.  

Fence-sitters tend to be less vocal and less invested, so getting their opinion is difficult. That means you’ll get biased answers from your polarized users, and fewer answers from the middle.

Fallacy 2: Survey Fatigue or more broadly The Diminishing Value of Work
You’ve likely run into the term Survey Fatigue before, but you probably haven’t spent much time digging into the theory behind it.  

The diminishing value of work refers to the initial value a respondent feels completing the survey is worth at the beginning, and how that value is impacted as they move through it. There is a certain amount of commitment required for a person to perform any action, and this occurs every time a member participates in your community.

Each survey question is an additional amount of work. As effort is put into the survey, the value of that survey may become “less worth it.” Eventually, the value of the survey and how much effort they’ve put into it is no longer justified, and they click off. 

Many people view this as a survey’s length and how long the questions are, but in reality, short or long doesn’t matter. It’s about imparting enough value before, during, and after they fill out the survey, that they feel their action is still worthwhile by the last question. 

Fallacy 3: The Spiral of Silence 
This last fallacy is less known, but you can think of it as the ultimate consequence of letting fallacies 1 and 2 get too far out of hand.  

The vocal bias skews our data to favor the involved. Fence-sitters won’t see as much value in responding, but they are still important. If you make decisions based on the more vocal, then over time fence-sitters lose any sense of influence they had and begin to think their opinion, had they provided it, wouldn’t have made a difference.

So they start to think their opinion isn’t valued, or that you won’t listen to them, and they intentionally withhold it. As a result, their ideas aren’t heard, and their voices DO become of less value.

If this sounds a lot like a certain country’s political situation - you’re right. It’s the exact same mechanism, and it happens at every level of a community; small group to policy. 

Now let’s talk about solutions. 
To get around these fallacies, there are a lot of tactics and fail-safes you can implement. A lot of organizations attach extrinsic rewards like raffles and badges, which have a more stable “value,” to their surveys to ensure the survey’s value is viewed more favorably.

It should be no surprise that, as community managers at SC.O, we don’t recommend that approach. Extrinsic reward is a great way to devalue the intrinsic value of influence by way of participation. The work holds less value if you reward it with something detached from your brand.

 On top of this, the quality of the submissions you get from those who simply want the reward at the end of the survey may not give responses that match in quality with those who are driven by intrinsic motivations.

Instead, we recommend making the work smaller and spreading it out over time by adding 1-2 question surveys like the NPS to your regular community management or social media campaigns. Then have real public conversations that credit those thinkers, and use those results to perform a transparent action.

The questions will encourage “passive engagement” rather than require active commitment, so the effort is lower and the conversation is viewed as valuable. It will also pull some of your “lurkers” and “fence-sitters” out of their holes if you spin the conversation toward them. Consider priming an audience before the survey with a #LoveOurLurkers campaign!

 


​You should also make this easier on yourself!

Collect, tag and measure your community's passive comments across all your social channels in one place by implementing our Social Currency Metrics System for free!
Install it in under 1 hour

 

Problem 3: Most surveys aren't “science enough” to join the science club

Let’s get elementary now…
“If your survey uses the scientific method over
the social-scientific process,
you’re not collecting your data correctly, at all.”
The traditional scientific method
The scientific method is predicated on the systematic manipulation of “variables". 

​What is the cause-effect relationship between your studied thing and your hypothesis?  The idea is to control as many variables as possible and test the relationships between 1-3 unknown variables. This allows you to solidify correlations into findings and then theories.


This works great in lab environments, on problems with clearly defined answers, when different approaches have clear upsides and downsides, or with scientific principles that are the same no matter where you go.

But that’s simply not reality when you start adding people, culture, and social structures throughout the big wide world to the mix.

People are too diverse and do things for too many different reasons.  Often their actions can only be defined correlationally.

And that is why the great social-scientists of the 1900s built on the scientific method with the lesser known but ridiculously impactful “social-scientific process”.


The Social-Scientific process | SociallyConstructed.Online
The primary reason people believe that data collected from surveys is highly subjective is that the data stopped at step 3 of this larger process. You collected it, looked at it, found some cool stuff, and said, “huh, looks like this is a thing”. Stopping there can’t solidify the correlations you find into clear causal effects.

The social scientific process creates objective data out of subjective data by taking the cause-effect relationship of the scientific method further; it tests the environmental factors at the same time as the variables using the rule of generalization. 

For example, under the traditional scientific method, your survey goes around the loop once:
  1. You observed trends in your community
  2. You wrote a survey about it 
  3. You wrote your questions specifically to suss out those variables
  4. You got your results, analyzed them, and made a report
  5. You disseminated them to the powers that be

This same process makes a really solid go at steps 1-3 of the social-scientific process, but it stops at the rule of generalization.
​

It doesn’t investigate the limitations of those hypotheses, it doesn’t root out fallacious conclusions, it doesn’t generalize to wider audiences, and it doesn’t test the limits of what you’ve learned so you know where the correlation ends.

If you do the survey using the social-scientific process it goes around the scientific method a full 3 times before you get “results”. 

So, this social-scientific process is the reason we love the Net Promoter Score. If you implement the NPS as we taught you in our prior blogs, it covers a full go-around of the social-scientific process, over and over again.

To Conclude

The Net Promoter Score is great because it turns active data collection into passive collection, it is continually available, it only prompts for comments when people really want to provide them, and it's based on emotions rather than pre-assessed logic.
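As a quick refresher on the arithmetic from the earlier posts in this series: the score is the percentage of promoters (9-10 ratings) minus the percentage of detractors (0-6), with passives (7-8) counted only in the total. A minimal sketch (the sample scores are made up):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 4 promoters, 2 passives, 2 detractors out of 8 responses.
sample_scores = [10, 9, 9, 8, 7, 6, 3, 10]
print(nps(sample_scores))  # 50% promoters - 25% detractors = 25.0
```

The number alone is just step 3 of the process above; the comments attached to the extreme scores are where the qualitative value lives.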

So to conclude, we don’t want to discourage you from implementing these awesome community management tools. We are not against surveys. 

What we are saying is that the way you implement these community analytics tools needs to be done with these issues in mind.  Each of these problems is a reason people have started to mistrust qualitative data for the past several decades.  
​
We aim to fix that by making qualitative data easier to collect and analyze, more objective, and harder to read falsely into by taking your use of qualitative data further with our Social Currency Metrics System.

Check out the system and how to build your own for free here, or read the previous two parts of this blog!
​
Part 1: What is NPS?
Part 2: How to install the NPS
0 Comments


Navigation
What is the SCMS?
Access the SCMS Demo
Claim your 1-hour consultation
The Socially Constructed blog
Contact Us
Our Best Advice
How to implement the SCMS
​in your preferred platform.
What is the Net Promoter Score and why use it?
3 reasons surveys aren't trusted, and how to solve them, with science!
Who we are and why it matters for you
Ria Gupta's journey for Google summer of Code!
Contact Us
​307-274-5516
Samantha@sociallyconstructed.online
Dylan@sociallyconstructed.online
Community Charter
Policies, Terms & Conditions
Licenses
SociallyConstructed.Online LLC
6715 Autumn Ridge Dr. Unit 2
Fort Collins, Colorado 80525
© Copyright 2020
SociallyConstructed.Online
All Rights Reserved
Creative Commons License
This website and all works connected to SociallyConstructed.Online are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License unless otherwise noted.