We've gotten involved in the CHAOSScast open source community podcast as a regular part of our operations here at SC.O. Venia is an organizer and panelist on the podcast, where we discuss community health, measurement, and more. Dylan edits and publishes some episodes.
We'll regularly be cross-posting episodes in the CHAOSS community here for you to listen to.
If you'd like to see them all, head over to CHAOSS.community or to your favorite podcast app!
For this episode highlight, we're also truly excited to say we interviewed one of the most well-known community managers and strategists in the space: Jono Bacon.
This is one of the best moments of my career, really. (Venia)
When I started my nonprofit RESCQU.NET and fell in love with community management, my partner was working for an open source company called Canonical. This was the same Canonical where Jono began his community management career, and over time I got the opportunity to speak with him about community.
Over the course of my academic and industry careers I continued to stay in touch, speak with him, get advice, and read his books The Art of Community and People Powered. Now, in this podcast, I'm talking with him about the Customer Value Journey, community onboarding, what it means to be metrics-obsessed, and starting your career in community management.
I am truly blessed to have done this podcast with Nicole and Brian. Thank you both.
Jono Bacon is an important inspiration for us. Here are two videos where he answers our questions directly!
This week's collaboration meeting on the 26th was prolific.
In this meeting we discussed the trends observed by tagging the data and discussed the future path of the SCMS's development.
As expected, when we manually tagged the data, the most observed of the five tags was “Utility”, because GrimoireLab provides very useful tools for software development analytics.
Further, we discussed the noise elements observed in the data and what to do with them. IRC messages rendered some noise, and those records could not be tagged. Since we can't leave these records untagged, we planned to remove such instances from the Google Sheets implementation so that we can focus on meaningful text.
Some examples of noise "text" include:
“abc_is now known as xyz” or “ChanServ sets mode: +o collabot`”
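A filter for this kind of IRC service noise can be sketched with a couple of regular expressions. The patterns and the record shape below are illustrative assumptions, not the project's actual cleaning code:

```python
import re

# Patterns for IRC service noise: nick renames and ChanServ mode changes.
# These two patterns are illustrative; real logs may need more variants.
NOISE_PATTERNS = [
    re.compile(r".*is now known as .*"),
    re.compile(r"ChanServ sets mode: .*"),
]

def is_noise(message: str) -> bool:
    """Return True if the IRC message is service noise, not a human comment."""
    return any(p.fullmatch(message.strip()) for p in NOISE_PATTERNS)

def drop_noise(messages):
    """Keep only records that can meaningfully be tagged."""
    return [m for m in messages if not is_noise(m)]
```

For example, `drop_noise` would discard both of the noise strings above while keeping a real comment like "Thanks, that fixed my issue!".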
We also decided to utilize the SCMS's “Category”, a way to classify our records for more fruitful analysis by breaking information down into specific keyword sets.
For instance, a comment in which a user is asking for help with an issue might be indicative of ‘troubleshooting’. For GrimoireLab, since the majority of GitHub comments were related to troubleshooting, we decided to split this category into two: “Incoming Request” and “Technical Support”. Having categories as precise as possible helps in analyzing data more efficiently. Categories like “Interpersonal”, “Operational”, and “Transactional” were also put down to be added later.
Apart from tagging data, I also spent the week learning the process of making a visualisation.
I made some basic visualisations using the index containing the randomly tagged data. The visualisations I've made so far include a pie chart of the five tags, a bar chart of the number of comments received per week, and a view of the number of conversations from different channels, and I'll produce more in the coming weeks.
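Under the hood, a pie chart like the one for tags is backed by a simple terms aggregation on the enriched index. Here's a sketch of the kind of query body such a visualisation issues; the field name follows the study described in these posts, and treating it as an aggregatable keyword field is an assumption:

```python
import json

# Terms aggregation counting documents per tag value; this is roughly
# the query a pie-chart visualisation runs against ElasticSearch.
tag_pie_query = {
    "size": 0,  # we only want the aggregation buckets, not the documents
    "aggs": {
        "tags": {
            "terms": {"field": "extra_scms_tags", "size": 5}
        }
    },
}

print(json.dumps(tag_pie_query, indent=2))
```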
Thanks for reading!
This week marks the end of the first coding period. It has been a wonderful month of coding, brainstorming, blogging, and interacting with the so-awesome mentors!
That’s it for this week! Make sure you have a look at the project updates on Github #ria18405/GSoC.
All questions and comments are welcome! Stay tuned for more weekly updates.
In the second week, I exported the extracted data to an Airtable view.
I didn’t realise it until just last week, but Airtable's free version has a limit of 1200 records per base, and as far as GrimoireLab is concerned the SCMS could be implemented in any spreadsheet software.
Our initial goal was to collect as much data as possible to represent the community’s sentiments holistically and make SCMS usable by other open-source communities as well.
So we decided to shift the implementation to Google Sheets. Google Sheets has a limit of 5 million cells, which is adequately enormous [yeah....0.o].
So I collected all the data from GitHub, the GrimoireLab mailing lists, and the CHAOSS IRC channel, randomly tagged it (as done in Week 2), and exported it to Google Sheets using an API.
After looking at the data carefully, we also noticed GitHub comments by Coveralls indicating that coverage increased or decreased, along with lots of IRC messages indicating that a person had joined or left the channel. This data provides no additional information about the community and carries no user sentiment, so we planned to remove all those unnecessary records. We ended up with around 5.6k filtered records from GitHub, the mailing lists, and the IRC channel.
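A sketch of that filtering step, assuming records carry simple 'author' and 'body' fields (the real enriched indexes use their own schemas):

```python
def is_meaningful(record: dict) -> bool:
    """Drop bot coverage reports and IRC join/leave notices."""
    author = record.get("author", "")
    body = record.get("body", "")
    if author.lower().startswith("coveralls"):
        return False  # automated "coverage increased/decreased" comments
    if body.startswith("has joined") or body.startswith("has left"):
        return False  # IRC presence notices carry no sentiment
    return True

def filter_records(records):
    """Keep only records that say something about the community."""
    return [r for r in records if is_meaningful(r)]
```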
A major part of the week was planned to be devoted to building a Codex Sheet.
Codices help us rely on qualitative data in an objective sense over time, much like quantitative data. To reduce the subjectivity of qualitative data, it is imperative to define a codex table that helps in tagging data points accurately by better defining and honing in on the purpose of each tag. It also helps the team collaborate and keep similar ideas running.
Codices contain the definitions of metrics within the organisation (here, GrimoireLab) and an example to illustrate each definition. They also include “when to use” and “when not to use” guidance for each metric.
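A codex entry can be represented as a small structured record. The wording below is illustrative, not the project's actual codex text:

```python
# One codex entry per tag: definition, example, and usage guidance.
codex = {
    "Utility": {
        "definition": "The comment indicates the project's tools were "
                      "useful (or not) for the author's task.",
        "example": "GrimoireLab made building our contributor report easy.",
        "when_to_use": "The author describes concrete value received.",
        "when_not_to_use": "The author is only asking how to do something.",
    },
}

def lookup(tag: str) -> dict:
    """Return the codex entry guiding how a tag should be applied."""
    return codex[tag]
```

Keeping all four fields together is what lets two different taggers apply the same tag the same way.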
I also made a short rough draft of an overview of SCMS to be published for the CHAOSS blog!
Other than this, since we now have the full data set in Google Sheets, I converted this entire sheet's data to the ElasticSearch index (precisely as done in Week 2). The only difference is the number of records: earlier, only a limited number of records contained the “extra_scms_data” field in the enriched index.
Now, every meaningful record (i.e. ignoring comments by coveralls) has an additional field present in its ElasticSearch index.
And the weekly meeting...
Per usual, I had a weekly meeting with the mentors on 19 June ’20; minutes are here.
We discussed dashboarding and expanding the SCMS. We’ll be having a collaboration meeting on Friday the 26th to discuss the findings from the tagged data. Until then, in the coming week I’ll focus on writing tests and making a basic dashboard.
That's it for this week! Make sure you have a look at the project updates on Github #ria18405/GSoC. All questions and comments are welcome! Stay tuned for more weekly updates.
This week followed the timeline.
We had planned to get the pipeline ready first and then move forward with the rest of the procedures. In the first week of the coding period, I extracted the required data from ElasticSearch and converted it to the default SCMS implementation's Airtable view. This week, I randomly tagged the datasheet with the five metrics and linked the additional data back to ElasticSearch via a ‘study’ called ‘enrich_extra_data’.
To explain exactly what this means and looks like, I’ll describe the steps involved in detail.
The first step was to randomly tag the dataset with all possible combinations of the social currency parameters (Transparency, Utility, Consistency, Merit, and Trust) in an Excel sheet. Here, we added another column to the sheet named ‘scms_tags’.
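The random tagging step can be sketched like this; the row shape is an assumption for illustration, and real manual tagging later replaces these placeholder values:

```python
import random

SCMS_TAGS = ["Transparency", "Utility", "Consistency", "Merit", "Trust"]

def random_tag(rows, seed=42):
    """Add a random non-empty subset of the five tags to each row.

    This mirrors placeholder tagging used to exercise the pipeline
    before real manual tagging happens.
    """
    rng = random.Random(seed)  # seeded so runs are reproducible
    for row in rows:
        k = rng.randint(1, len(SCMS_TAGS))
        row["scms_tags"] = sorted(rng.sample(SCMS_TAGS, k))
    return rows
```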
The second step was writing a Python script, Excel2Json, which converts the Excel sheet to the specific type of JSON used as input to the study. You can find the JSON here.
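The core of such a conversion might look like the sketch below. In the real script the rows would come from the Excel sheet (e.g. via pandas.read_excel); the payload shape here, an item id mapped to its extra fields, is an assumption for illustration rather than the exact schema enrich_extra_data consumes:

```python
import json

def rows_to_scms_json(rows):
    """Convert tagged spreadsheet rows into a JSON payload for the study.

    Each row is assumed to carry the item's 'uuid' and its 'scms_tags'.
    """
    payload = {
        row["uuid"]: {"scms_tags": row["scms_tags"]}
        for row in rows
    }
    return json.dumps(payload, indent=2)
```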
Now, in step 3, we need to execute a study called ‘enrich_extra_data’ in grimoirelab-ELK.
Edit setup.cfg according to the study and input the URL of the JSON made above.
After successfully performing the study, we can see that the ElasticSearch indexes have the extra parameter scms_tags in the dump.
The study prefixes the field name with ‘extra’, so the field name is ‘extra_scms_tags’.
The importance of the SCMS' Codex
After building out the data set and meeting with my mentors on June 12th, I understood the importance of the codex sheet in training the tagging setup. The minutes of the meeting are here.
Defining the codex table helps to increase universality and decrease the subjectivity of the data. It also allows us to rely more on qualitative data rather than quantitative data.
After the training, we discussed the path for the next week: I’ll be making a codex table containing the definition of each trend observed, along with the ‘when to use’ and ‘when not to use’ cases.
Additionally, I should note that we found a limit on the number of records in Airtable, so we planned to shift the implementation to Google Sheets. Future blogs will use Google Sheets.
This week went off well; looking forward to the next one!
Make sure you have a look at the project updates on Github #ria18405/GSoC.
All questions and comments are welcome!
Stay tuned for more weekly updates.
Preparing for next week...
I had a meeting with my mentors on 22 May 2020, where we discussed the implementation of the Social Currency Metric System and how the codex table is created and used.
I had done a pilot study to understand the pros and cons of using existing enriched indexes versus creating a new ad-hoc enriched index for the SCMS. The results favoured creating new enriched indexes. This week, I also tried a very interesting pre-coding-period task: extracting comics data from Marvel using GrimoireLab tools like Perceval, which meant creating a new backend for Marvel. [the single best sentence we've ever heard at SC.O ~ Venia] The repository can be found here.
After cloning the Perceval directory and executing perceval marvel, it yields all the comics data. Integration with ELK remains and will be continued during the coding period as a secondary priority.
After this, I had the last pre-coding period meeting with my mentors on 29 May 2020.
We focused more on the implementation plan and timeline found here.
We also had a detailed discussion about applying ‘keyword analysis’ to tag data on the basis of social currency. We analysed the setbacks of using tags, a major one being differentiating negative sentiments from positive ones. Imagine something like “I find this product to be very useful.” versus “I did not find this product to be of any use.” Both statements would be categorized under the parameter “Utility”, but separating these contrasting sentiments will help make the SCMS a more meaningful system.
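To see why this matters, here is a deliberately naive polarity heuristic: the presence of a negation word flips the reading of a Utility-tagged comment. Real keyword analysis (e.g. with NLTK) is far more involved; this only illustrates the distinction the meeting was about:

```python
NEGATIONS = {"not", "no", "never", "hardly", "without"}

def utility_sentiment(comment: str) -> str:
    """Classify a Utility-tagged comment as 'positive' or 'negative'.

    Naive heuristic: any negation word flips the polarity. This will
    misclassify plenty of real sentences; it is only a sketch.
    """
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return "negative" if words & NEGATIONS else "positive"
```

On the two example sentences above, this splits them correctly even though both would carry the same “Utility” tag.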
Finally, the time has come that I had been looking forward to for so long: the Coding Period! I hope to bring out my best during this journey. Yay! Looking forward! ❤️
Here is Week 2: Launching the SCMS in Airtable!
This blog has been cross-posted from Ria's blog with her permission...
The social bonding period continues
I had a meeting with all of my four mentors on Friday, 15 May’20. The details are here.
We largely discussed the past two weeks' progress and our understanding of the SCMS (Social Currency Metric System) in more detail. It was similar to a training session on the importance of qualitative data over quantitative data. The main agenda of the training was “Why is qualitative data rejected in business, and how does reframing its collection using the SCMS make it useful?” It was a very informative presentation delivered by Samantha and Dylan.
For the next meeting, we’ll be discussing the concrete implementation of the project both technically and theoretically. For this, I have some implementation ideas written in my Project Proposal, and I had a meeting with Valerio to discuss the pros and cons of different approaches. One approach is to use the already present enriched index, and the other is to create new ad-hoc indexes.
What I did this past week
The plan for next week
Next week's plan is to advance the ELK implementation and include GitHub issues and comments. Along with this, I'll try to implement a method to break customer reviews into two or more sentiments without introducing incoherence or breaking context. This will involve checking NLTK's implementation and understanding MaxQDA's approach to such situations.
Don't miss out on more of Ria's Journey over the next several weeks! We have a series of these blogs throughout the process and you can start with the previous one here:
We discussed some fundamental things first regarding the communication platform for meetings for the next 3 months. Along with this, we discussed the mode of updating progress and some tasks to be done prior to the coding period. He explained every component of CHAOSS, how any data point is moved from one to another, to get an overview of the community. I was made aware of several Working Groups present in the community and the official mailing lists. He proposed some tasks to help me get familiar with grimoirelab and ELK.
He gave excellent advice to make a project log Github repository and keep updating it with time. It can act like a project-tracker repository and will make it easier to track the project’s developments. It will contain all blogs and a summary of all weekly meetings.
On Tuesday I met the rest of my mentors. It was absolutely a pleasure meeting them. We had a friendly session to discuss how things are going to look over the next three months. I bombarded them with my many questions about the SCMS (Social Currency Metric System), and they explained the concepts and the idea behind the metric system in greater detail. We discussed some details on communication platforms and blogs too.
The Community-Bonding Plan:
After meeting my mentors, I could feel a huge bubble of inspiration within me!!
This 3-Step Guide Will Install an NPS System In Your Community.
The Net Promoter Score, or the NPS, is a very unique 2-question survey borne out of the customer service industries of Amazon, Netflix, and other internet giants.
The NPS asks for your target audience to describe how they feel about you. But it asks them at several intervals of their customer value journey, and then each time it asks, it offers a completely optional and unassuming (but hugely rewarding) comment box below it.
It may seem simple, but the Net Promoter Score is almost single-handedly responsible for measuring essential quantitative data in the biggest companies, and as we covered in part 1 of this blog series, it's no exaggeration to say that it saved Comcast's reputation.
This blog will guide you through planning, designing, and implementing the Net Promoter Score for your community, brand, and/or team in three steps.
What Implementing The NPS Will Take:
We expect that executing these steps will take 1-2 weeks of fairly light work.
Preparing to implement the NPS is mostly about getting all of the stakeholders into one room to decide where the survey should be placed, and how it will be supported.
Once the system is set up, you can likely expect it to take about 1 hour per week to analyze what comes out of it.
Note: This blog won't cover the contents of the survey. Review part 1 of this blog series for that.
1. Where Does the Target Audience Meet You?
As with all new analytics systems, planning is your first step.
List all the direct points of contact you have with your community, customers, or target audience. This is where your people are talking to them. Regular emails? Support tickets? Over the phone? Where are your touch-points on social media?
Then list all the indirect points of contact your target audience has with your brand. This means they’ve found your brand and your project, but they’re not interacting with your people. Examples of indirect contact include landing pages, websites, FAQ pages, YouTube how-to videos, and so on.
Next, consider who the people supporting these channels may be. They could be support representatives, lead posters on a forum, volunteers, and employees, but also contractors, admins, community members, and the like. Now you know who you'll need to have a discussion with to implement an NPS system across your organization.
So set up a meeting with those who are involved in running each of these channels. Reach out to them to discuss what your goals are before you set up an NPS system. Share this blog with them to get them to buy in, and get some feedback on how they would prefer to receive the NPS survey from those filling it out.
By the end of Step 1's meeting, you should have an idea of how and where you should implement the NPS survey most effectively. This implementation will probably look different for each of the channels you target, but since it's just a question and a comment box, it’s usually pretty low effort.
Take an email going out to those that directly interact with you, like a weekly newsletter. You can embed the survey directly into the bottom of the email and each of the numbered faces would lead to a different landing page where you take their comment.
If that isn't going to be particularly helpful because you don't rely on emails to communicate within your community, you can put the NPS at the bottom of a Support landing page, or it could be a pop-up on a thank-you page once they've completed a specific action.
It could even be a podium, as with this McDonald’s Kiosk. Regardless, it should be implemented in the easiest way possible for people to provide their responses.
2. Implement and Prepare To Score the Data
The next step is to put the NPS in the places you've just isolated. Once you've done that, you're ready to go!
Well... almost ready to go. Now you have to figure out how to support it. Don't worry, we can help!
When the reports come rolling in, don’t consider this a one-and-done survey situation. If you don't use it to discover community member pitfalls and figure out ways to fix them, installing the NPS was useless.
People are not leaving comments just to blow off steam. This is a social contract. When they write a comment, they do so with the expectation that you will read and consider what they have to say, especially if they aren't the only person giving that type of feedback.
NPS is an around-the-clock system. Keep it going and add it to your analytics meeting every week. Score the data according to the following chart. Then discuss it with your project team.
How scoring works...
Promoters are the only people who really count for the NPS; the weighting puts the score in your audience's favor. These are the people giving you the social reputation you need to really progress your company. Most people will probably be either dissatisfied or kind of “meh” about your product; put them in a group labelled “needs improvement.”
Subtract the percentage of your detractors from the percentage of your promoters and voila: you'll get a number that your CEO should be happy with when compared to others.
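Using the standard NPS bands (on the 0-10 scale, promoters score 9-10 and detractors 0-6), the arithmetic is simple:

```python
def nps(scores):
    """Net Promoter Score: % promoters minus % detractors, in [-100, 100]."""
    if not scores:
        raise ValueError("no responses yet")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))
```

For instance, ten responses of [10, 9, 9, 8, 7, 6, 3, 10, 0, 9] give 50% promoters and 30% detractors, for an NPS of 20.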
3. Leverage NPS Results to Improve the Score
NPS is a powerful tool that opens up the treasure trove of insights qualitative data can provide when utilized properly.
By tracking your NPS score over time and then using the trends you find to make changes in your business, you'll find that you can make a change on Monday and see a tangible increase or decrease in your NPS score as soon as the next Friday. Over time, you can challenge your teams to hit goals based on the feedback you receive.
In truth, NPS is pretty low-hanging but very juicy fruit among qualitative analysts. There are tons of insights left on the table for you to pick up as you get better.
As Chris Mercer at MeasurementMarketing says:
“First get good. Then get better.”
So, if you’re ready to see what NPS can do for your company, implement it!
The limits of the Net Promoter Score
One of the most important concepts we at SC.O push is called the rule of generalization:
To truly leverage the Net Promoter Score, you need a plan for understanding the qualitative data these comments give you objectively and at scale.
It’s one thing to read the comments you receive and act on them when they impact your team. It’s quite another to recognize that the trends in those comments are limited by a few harsh realities that apply to any survey you ever use:
Because this is a survey that relies on an action, 70% of your audience is not actively telling you what they think, just like a regular sales funnel.
So, in part 3 of this series on surveys, we're going to cover what those limitations are, and what you can do to get around them. Stay tuned for next week!
Let us know if you need help getting started, find hang-ups, or have any questions. I’ll watch the comments below and you can email me at Samantha@SociallyConstructed.Online.