Venia is a regular organizer and panelist on CHAOSScast, where we discuss community health and measurement. As part of this continuing partnership, we'll regularly be cross-posting episodes from the CHAOSS community here for you to listen to!
If you'd like to see them all, head over to CHAOSS.community or to your favorite podcast app! For this episode, Ruth Cheesley of Mautic, an open-source marketing platform, let us in on how she is measuring community health and growth for a non-coding audience. Mautic is a big community!
Ruth's biggest issue with Mautic at Acquia has been identifying technical debt across the several demographics that make up the large community.
On one end she has developers who must build solutions for marketers, and on the other end marketers who will never be coding contributors. Her biggest focus was making sure that the parts of the community who thought the project was dead, or that changes were not being implemented, saw the parts of the community that were truly trying to resolve those issues. Her answer was a publicly available community dashboard so the community could directly interact with the data. She built it using GrimoireLab's community health dashboards and CHAOSS metrics, which conveniently include our own Social Currency Metrics System! In the podcast we discuss her case study using the system to re-engage parts of the community and bring them together with others. Post your opinion below: How do you feel about publicly releasing metrics to your community members and to the general public? Would you feel safe doing that with your community?
This past Friday I was given the opportunity to do a small guest post for a local community here in Fort Collins, called "Community Catalysts."
The network is run by an amazing community expert and teacher, Darrick Hildman. I've placed my full interview in the image below, but he asked me a question I think is worth a good amount of introspection and exploration for your own community: "What does a 'meaningful community' look like to you?"
In my response I started with a somewhat canned answer about how important it is that we focus on the mission and values of our community, but before I knew it, I was second-guessing what he meant by "meaningful".
The only way I could answer the question was by comparing the communities I've built to my family and friends: the people in my life who mean something. And my answer surprised even me. There is a progression in your community's relationship with its members. A "useful" community provides value to its members. A "successful" community accomplishes the goals and mission of that community by imparting value to its members. An "engaged" community encourages relationships to grow beyond the value a member initially desires, and encourages them to give back. But the #1 metric I use to determine how "meaningful" a community is... self-disclosure. A member's willingness to disclose more about themselves shows a genuine affinity with other members. A meaningful community is exemplified in those moments when a person's relationship with others in the community transcends the community's purpose or the value-added engagement that keeps them interacting. A meaningful community creates a real feeling of affiliation that proves to a member that this is where they want to spend their time.
Creating a space for self-disclosure is hard
Creating a plan that fosters a meaningful community is just as important as making one that performs according to the goals of the brand. It requires a plan and a good amount of thought.
Clearly, I hadn't thought about it enough. So, I thought I'd pay it forward to you: what tactics are you using to create "meaning" in the hearts and minds of your community members? How are you making the community better?
And while you're thinking about it, I invite you to read the rest of my post, join us in the Community Catalysts group, and check out Darrick's new consultancy!
This week's collaboration meeting on the 26th was, in a word, prolific.
In this meeting we discussed the trends observed while tagging the data and the future path of the SCMS's development. As expected, when we manually tagged data, the most observed of the 5 tags was "Utility", because GrimoireLab provides very useful tools for software development analytics. We also discussed the noise elements observed in the data and what to do with them. IRC messages rendered some noise, and those records could not be tagged. Since we can't leave records untagged, we have planned to remove such instances from the Google Sheets implementation, so that we can focus on meaningful text. Some examples of noise "text" include: "abc_ is now known as xyz" or "ChanServ sets mode: +o collabot`"
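Such noise records can be filtered mechanically before tagging. Here is a minimal sketch, with patterns invented to match the examples above; a real filter would need patterns tuned to the actual IRC logs:

```python
import re

# Illustrative patterns for IRC housekeeping messages that carry no user sentiment.
NOISE_PATTERNS = [
    re.compile(r".* is now known as .*"),     # nick changes
    re.compile(r"ChanServ sets mode: .*"),    # channel-mode changes
    re.compile(r".* has (joined|left) .*"),   # join/part notices
]

def is_noise(text):
    """Return True if an IRC message matches any known noise pattern."""
    return any(p.fullmatch(text.strip()) for p in NOISE_PATTERNS)

messages = [
    "abc_ is now known as xyz",
    "ChanServ sets mode: +o collabot`",
    "Thanks, that fixed my enrichment error!",
]

# Keep only the messages worth tagging.
kept = [m for m in messages if not is_noise(m)]
```

Running the filter over the three sample messages leaves only the last one, which is the kind of sentiment-bearing text the SCMS cares about.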
We also decided to utilize the SCMS's "Category" field, which is a way to classify our records for more fruitful analysis by breaking down information into specific keyword sets.
For instance, a comment in which a user is asking for help with an issue might be indicative of 'troubleshooting'. For GrimoireLab, since the majority of GitHub comments were related to troubleshooting, we decided to split this category into two: "Incoming Request" and "Technical Support". Having categories as precise as possible helps in analyzing data more efficiently. Categories like "Interpersonal", "Operational" and "Transactional" were also put down to be added later.
Basic Visualisations
Apart from tagging data, I also spent the week learning the process of making a visualisation.
I made some basic visualizations using the index which contained the randomly tagged data. The visualisations I've made so far include a pie chart of the 5 different tags, a bar chart of the number of comments received per week, and data indicating the number of conversations from different channels; I'll produce more in the coming weeks. Thanks for reading! This week marks the end of the first coding period. It has been a wonderful month of coding, brainstorming, blogging, and interacting with the so-awesome mentors! That's it for this week. Make sure you have a look at the project updates on Github #ria18405/GSoC. All questions and comments are welcomed! Stay tuned for more weekly updates.
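As a rough illustration of the counting that feeds a pie chart like that, here is a sketch using Python's standard library; the record shape and field name are assumptions for the example, not the enriched index's actual schema:

```python
from collections import Counter

# Hypothetical records as exported from the enriched index; only the tag field matters here.
records = [
    {"body": "...", "scms_tags": "Utility"},
    {"body": "...", "scms_tags": "Trust"},
    {"body": "...", "scms_tags": "Utility"},
    {"body": "...", "scms_tags": "Transparency"},
]

# Per-tag totals: these counts are exactly what a pie chart of the tags would visualize.
tag_counts = Counter(record["scms_tags"] for record in records)
```

The same Counter pattern works for the weekly bar chart by counting a week field instead of the tag field.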
Airtable's limitations
In the 2nd week, I had exported the extracted data to an Airtable view.
I didn’t realise it until just last week, but Airtable's free version has a limit of 1,200 records per base, and as far as GrimoireLab is concerned the SCMS could be implemented in any spreadsheet software. Our initial goal was to collect as much data as possible to represent the community’s sentiments holistically and make the SCMS usable by other open-source communities as well. So we decided to shift the implementation to Google Sheets, which has a limit of 5 million cells, which is adequately enormous [yeah....0.o]. So, I collected all the data from GitHub, the GrimoireLab mailing lists and the CHAOSS IRC channel, randomly tagged it (as done in Week 2), and exported it to Google Sheets using an API. After looking at the data carefully, we also noticed that we had GitHub comments by Coveralls indicating that coverage increased or decreased, as well as lots of IRC messages indicating that a person had joined or left the channel. This data does not provide any additional information about the community and is not user sentiment, so we planned to remove all those unnecessary records. In the end we collected around 5.6k filtered records from GitHub, the mailing list and the IRC channel.
The Codex
A major part of the week was planned to be devoted to building a Codex Sheet.
Codices help us rely on qualitative data in an objective sense over time, much like quantitative data. To reduce the subjectivity of this qualitative data, it is imperative to define a codex table which helps in tagging data points accurately by better defining and honing in on the purpose of each tag. It also helps the team collaborate and keep similar ideas running. Codices contain the definitions of metrics within the organisation (here, GrimoireLab) and an example to illustrate each definition. They also contain "when to use" and "when not to use" guidance for each metric.
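To make the shape of such a codex concrete, here is a small sketch; the entries below are invented for the example and are not GrimoireLab's actual definitions:

```python
# One codex entry per tag: definition, example, and usage guidance,
# mirroring the columns of a codex sheet.
codex = {
    "Utility": {
        "definition": "The member found practical value in the project or its tools.",
        "example": "This enricher saved me hours of manual reporting.",
        "when_to_use": "Comments praising or relying on a concrete capability.",
        "when_not_to_use": "General thanks with no reference to a capability.",
    },
    "Trust": {
        "definition": "The member expresses confidence in the project or its maintainers.",
        "example": "I knew the maintainers would fix this quickly.",
        "when_to_use": "Statements of confidence or reliance on people or processes.",
        "when_not_to_use": "Neutral status questions with no expressed confidence.",
    },
}

def guidance(tag):
    """Return the usage guidance a tagger should consult before applying a tag."""
    entry = codex[tag]
    return entry["when_to_use"], entry["when_not_to_use"]
```

Keeping the codex in one shared structure (or sheet) is what lets several taggers apply the same tag the same way.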
I also made a rough first draft of an overview of the SCMS to be published on the CHAOSS blog!
Other than this, since we have the full version of the data present in Google Sheets, I converted this entire sheet's data to an ElasticSearch index (precisely as done in Week 2). The only difference is the number of records being used. Earlier, only a limited number of records contained the "extra_scms_data" field in the enriched index. Now, every meaningful record (i.e. ignoring comments by Coveralls) has the additional field present in its ElasticSearch index.
And the weekly meeting...
Per usual, I had a weekly meeting with the mentors on 19 June 2020; the minutes are here.
We discussed dashboarding and expanding on the SCMS. We'll be having a collaboration meeting on Friday the 26th to discuss the findings from the tagged data. Until then, in the next week, I'll be focusing on writing tests and making a basic dashboard. That's it for this week. Make sure you have a look at the project updates on Github #ria18405/GSoC. All questions and comments are welcomed! Stay tuned for more weekly updates.
This week proceeded according to the timeline.
We had planned to first make the pipeline ready, and then move forward with the rest of the procedures. In the first week of the coding period, I had extracted the required data from ElasticSearch and converted it to the default SCMS implementation's Airtable view. This week, I have randomly tagged the datasheet with the five metrics and linked the additional data back to ElasticSearch via a 'study' called 'enrich_extra_data'. To explain what this exactly means and looks like, I'll describe the steps involved in detail. The first step was to randomly tag the dataset with all possible combinations of the social currency parameters (Transparency, Utility, Consistency, Merit and Trust) in an Excel sheet, adding another column to the sheet by the name of 'scms_tags'.
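A minimal sketch of that random-tagging step, purely for illustration (the real sheet has many more columns, and the row shape here is an assumption):

```python
import random

# The five social currency parameters used as tags.
TAGS = ["Transparency", "Utility", "Consistency", "Merit", "Trust"]

rows = [
    {"id": 1, "body": "Thanks for the fix!"},
    {"id": 2, "body": "The enrichment step fails for me."},
]

random.seed(42)  # reproducible output for the example
for row in rows:
    # Assign a random non-empty subset of tags as a comma-separated 'scms_tags' column.
    chosen = random.sample(TAGS, k=random.randint(1, len(TAGS)))
    row["scms_tags"] = ", ".join(chosen)
```

Random tags are only a placeholder to exercise the pipeline end to end; the codex-driven manual tagging replaces them later.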
The second step was to write a Python script, Excel2Json, which converts the Excel sheet into the specific type of JSON used as input to the study. You can find the JSON here.
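The exact JSON shape the study expects is in the linked file; as a rough illustration of the conversion idea, turning tagged rows into a per-item mapping might look like this (the field names are assumptions, and a real script would load the rows with a library such as openpyxl or pandas):

```python
import json

# Illustrative rows as they might be read from the tagged Excel sheet.
rows = [
    {"uuid": "a1b2", "scms_tags": "Utility, Trust"},
    {"uuid": "c3d4", "scms_tags": "Transparency"},
]

# Map each item's identifier to the extra fields to attach during enrichment.
payload = {row["uuid"]: {"scms_tags": row["scms_tags"]} for row in rows}

json_text = json.dumps(payload, indent=2)
```

The resulting JSON is then published at a URL that the study can read.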
Now, in step 3, we need to execute a study called ‘enrich_extra_data’ in grimoirelab-ELK.
Edit setup.cfg according to the study and input the URL of the JSON made above. After successfully running the study, we can see that the ElasticSearch indexes have the extra parameter of scms_tags in the dump. The study appends an 'extra_' prefix to the field name, so the field name is 'extra_scms_tags'.
The importance of the SCMS' Codex
After building out the data set and meeting with the mentors on June 12th, I understood the importance of the codex sheet in training the tagging setup. The minutes of the meeting are here.
Defining the codex table helps increase universality and decrease the subjectivity of the data. It also allows us to rely more on qualitative data rather than quantitative data alone. After the training, we discussed the path for the next week: I'll be making a codex table which will contain the definitions of each trend observed, along with the 'when to use' and 'when not to use' cases. Additionally, I should note that we found the limit on the number of records in Airtable, so we planned to shift the implementation to Google Sheets; future blogs will use Google Sheets. This week went off well, and I'm looking forward to the next one! Make sure you have a look at the project updates on Github #ria18405/GSoC. All questions and comments are welcomed! Stay tuned for more weekly updates.
Putting Code to Pixel: My 1st Week Coding
I noticed that throughout my coding history, I had always undervalued the process of facing bugs. Now, I honestly feel that bugs make the entire coding process fun! After each debugging exercise, you understand the main tasks better, and you get a sense of achievement, however small that might be.
So this past Monday I put on my enthusiastic boots and set off to work. I first worked out which data attributes needed to be extracted to maintain consistency in the output data. After this, I made a new enricher for extracting GitHub data for the Social Currency Metrics System. I collected the comments made under GitHub issues and pull requests. We decided to keep the context for each GitHub comment, to bring more clarity and reduce idea redundancy. So, I retrieved the 'Title' of the pull request or issue as the 'context', since the title of an issue/PR conveys the same message as the comments that follow it. Similar steps were taken while extracting mails from the mailing lists; the 'subject' of the mail was set as the 'context' in this case.

Then, after performing an ElasticDump operation, I wrote a script called "ES2Excel" which converts data from ElasticSearch indexes into a CSV file, then into an Excel format, and then on to an Airtable view. The data obtained from mails (MBox) and comments (GitHub) needed to be collected together in one Excel sheet, so we perform "aliasing" on the ES indexes: we have two enriched indexes which we alias as a third index that can be used for creating the CSV and Excel files. We execute the ES2Excel script mentioned above on the aliased index, and the CSV sheet is then converted to an Airtable view using the Airtable API. This Airtable view is ready for all tagging procedures to be performed on it. The output can be seen below.

Where we are and where we're going
During our meeting on 5 June 2020, I was given training on the various background social theories behind the SCMS, including the "Grounded Theory Analysis" method.
It was fun to relate those theories to the Social Currency Metric System. Dylan and Venia also emphasized understanding the process of community interactions and bringing it to a codable form. My mentors suggested that we have a third data source besides GitHub and mailing lists, to avoid biased data, so we'll also be looking at adding Twitter or IRC data in the next week. For the most part, next week I'll be focusing on converting the randomly tagged text data back to ElasticSearch with the help of a 'study'. The work done this week was nicely aligned with the timeline. Looking forward to more learning sessions!
Preparing for next week...
I had a meeting with my mentors on 22 May 2020 where we discussed the implementation of the Social Currency Metric System and how the codex table is created and used.
I had done a pilot study to understand the pros and cons of using existing enriched indexes versus creating a new ad-hoc enriched index for the SCMS; the results favoured creating new enriched indexes. This week, I tried a very interesting pre-coding-period task: extracting comics data from Marvel using GrimoireLab tools like Perceval. This meant creating a new backend for Marvel. [the single best sentence we've ever heard at SC.O ~ Venia] The repository can be found here. After cloning the Perceval directory and executing perceval marvel, it yields all the comics data. Integration with ELK remains, and will be continued during the coding period at a secondary priority.

After this, I had the last pre-coding-period meeting with my mentors on 29 May 2020. We focused on the implementation plan and timeline, found here. We also had a detailed discussion about applying 'keyword analysis' to tag data on the basis of social currency, and analysed the setbacks of using tags, a major one being differentiating negative sentiments from positive ones. Imagine something like "I find this product to be very useful." and "I did not find this product to be of any use." Both statements would be categorized under the parameter "Utility", but separating these contrasting sentiments will help make the SCMS a more meaningful system.

Finally, the time I had been looking forward to for so long has come: the Coding Period! I hope to bring out my best during this journey. Yay! Looking forward! ❤️
We've spent the past month launching the SCMS and SociallyConstructed.Online.
It's been a lot of work, but that work has paid off, and we're excited to unveil some of the fruits of our behind-the-scenes work, much of which has been dedicated to promoting our upcoming partnership with the CHAOSS project. This blog is promoting the 1st of many collaborative efforts between SociallyConstructed.Online and CHAOSS (Community Health Analytics Open Source Software). We are incredibly thankful for the opportunity to sit down and record a full hour-long podcast discussing the Social Currency Metrics System on CHAOSS Community's podcast, CHAOSScast. If you're interested in how SociallyConstructed.Online emerged, why we do what we do, and what we have in store in the future, give it a listen here and consider subscribing to the podcast! Venia is officially a regular on the show and we plan to have a long-standing relationship with CHAOSS moving forward. What does this mean for you?
A More Formal conversation with CHAOSS
Prior to this podcast, we gave a more internal presentation to the CHAOSS community after last year's Open Source Summit in San Diego. It provides a more structured, and more visual, approach to the Social Currency Metrics System and CHAOSS.
We discussed some fundamentals first regarding the communication platform for meetings over the next 3 months. Along with this, we discussed how progress would be reported and some tasks to be done prior to the coding period. My mentor explained every component of CHAOSS and how data points move from one to another to give an overview of the community. I was made aware of the several Working Groups present in the community and the official mailing lists. He proposed some tasks to help me get familiar with GrimoireLab and ELK.
He gave excellent advice to make a project-log GitHub repository and keep updating it over time. It can act as a project-tracker repository and will make it easier to track the project's developments; it will contain all blogs and a summary of all weekly meetings. On Tuesday I met the rest of my mentors. It was absolutely a pleasure meeting them. We had a friendly session to discuss how things are going to look over the next three months. I bombarded them with my many questions about the SCMS (Social Currency Metric System), and they explained the concepts and the idea behind the metric system in greater detail. We also discussed some details about communication platforms and blogs.
Meeting Conclusions:
The Community-Bonding Plan:
After meeting my mentors, I could feel a huge bubble of inspiration within me!!
In this blog series, at opensource.com, and on her own page, Ria is going to update us on her project every week!
Before we launch into her first blog though... Ria, it is absolutely wonderful to have you! This blog has been cross-posted from Ria's blog with her permission... "Magic happens when you don't give up, even though you want to. The universe falls in love with a stubborn heart."
It was my first encounter with the open-source world.
Despite very encouraging mentors, it was very intimidating at first. I can still remember the numbness I felt after submitting my first PR. There were times when I doubted myself, but as it is said, the morning brings a ray of hope along, and I got back on my feet to work. I had applied for both GSoC and Outreachy under the organisation CHAOSS, and on May 04 I was euphoric to be selected for both! I decided to continue with GSoC during the summer of 2020, for the project "Implementing Social Currency Metric System in Grimoirelabs". I will be working remotely on this project for the next 3 months, from June to August 2020, in the software improvement position. My project has 4 mentors from different geographical locations.
I am glad to be a part of such a welcoming and encouraging community.
Want to see Ria's progress with the project in more real-time? Check out her project tracker here!
About the Project
Community sentiment is based on the opinions and expectations of community members, and it is very important for framing decisions. Member input on decisions essential to project health leads to better decisions, which in turn lead to better framing and execution. Collecting and processing data like emails, comments, issues, pull requests, and tweets will allow community leaders to make key quality decisions regarding the transparency and actionability of open-source project health. The data is tagged with respect to the social currency constructs: Merit, Trust, Transparency, Utility and Consistency.
By implementing the Social Currency Metric System, we will be able to measure the value of community interactions and accurately gauge the 'reputation' of a community. Implementing the Social Currency Metric System (SCMS) will be a huge milestone in providing a better view of project health in the open-source community, because through the SCMS we are adding another dimension, social currency, to the metrics. Measuring and analysing community interactions will definitely take CHAOSS one step ahead in measuring project health.
My Approach
My Role in the Project
Stretch Goals
Thank you
I thank my family, friends and super-helpful mentors for supporting me and encouraging me to work harder. ❤
I plan to dedicate myself completely to completing the project efficiently. I hope I do justice to the project and the community :) Looking forward to loads of fun and a fantastic learning experience! "I'm not afraid of storms, for I'm learning how to sail my ship!"
Follow the rest of her journey!
The 2-Question Survey That Saved Comcast
In 2014, a technology reviewer recorded a customer service call with Comcast to cancel his account.
The incident went so horribly viral that TIME magazine wrote about it, and the fallout did not end well: Comcast ended 2014 with the distinction of being listed below Monsanto as the worst company in America. So you might imagine my apprehension when, down on my luck and without enough freelance clients to support myself, I went to work for a newly formed Comcast Xfinity call center in Fort Collins, Colorado. I expected a repeat of the nightmare above. Instead, I joined one of the best companies to work for in 2019. Comcast, a veritable Titanic of a company renowned for its "meh" service packages and its awful customer service, went from "worse than Monsanto" to one of INC.'s top 100 companies to work for in the space of just 5 years. Using NPS, Comcast turned a doomed-to-sink Titanic headed for an iceberg on a dime, like it was a 10-person rowboat.
How did Comcast change its reputation? More importantly, what can this teach us about how to run projects, companies, and communities?
TL;DR: Comcast used a 2-question survey called the Net Promoter Score. Implementing it can connect you to your constituents, empower them to speak up, and give you the community feedback you need to make big decisions. Used correctly, the NPS can revive entire communities and, as in Comcast's case, brand reputation. In this blog we're going to discuss how the NPS works and what makes it so powerful. Then, in Part 2 of this 3-part blog on community surveys, we'll walk you through how you can implement it.
Why does the Net Promoter Score work?
The Net Promoter Score is a small revision to a common customer service survey question.
It takes this question:
And turns it into this question:
The average statistics nerd might see a few key design differences:
These design changes to the survey are the magic behind the NPS system.
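For reference, the arithmetic behind the classic 0-10 NPS is simple. Here is a minimal sketch using the standard thresholds (promoters score 9-10, detractors 0-6); this is the textbook formula, not Comcast's specific implementation:

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 ratings: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses.
ratings = [10, 9, 9, 10, 9, 7, 8, 7, 3, 6]
score = net_promoter_score(ratings)  # 30.0
```

Note that passives (7-8) count in the denominator but cancel out of the numerator, which is why NPS rewards strong advocacy rather than mild satisfaction.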
First, the point is not to see if your customers talk to others about your brand: they do. Instead, this question is about how they feel. Removing the numbers cuts to the emotional hind-brain, so the NPS uses emoticons to let people flag how they intrinsically feel. The last two changes are more complex. The question asked in the NPS has three different endings, but that's not because you're asking three separate questions; it's actually three completely separate surveys. The Net Promoter Score accounts for the differences between:
Net Promoter Score doesn't just account for customer feedback one time. It captures sentiment at multiple points along the Customer Value Journey, and it can capture the employees' side of the story as well. Collecting data at each of these junctures is the only way you can tease apart the complex feelings a person has about your brand or project. Let's go back to the Comcast example. Contrary to popular belief, people are largely satisfied with Comcast's internet services right up until the moment they're not. People hate calling Comcast for simple things like rebooting a modem. They dread the call, from being put on hold to actually dealing with a representative. Then, the representative they talk to has a large influence on how they feel about the company for days afterwards. Each progressive call mounts emotions on top of emotions. A customer could get the absolute BEST customer service, but biased interactions might get in the way: if they already loathed calling in because the product broke or their service is spotty, and they were expecting a nightmare call, their satisfaction rating likely won't reflect their overall experience appropriately. The NPS allows each customer to rate you several times, in different circumstances. Then, it compares that customer's response to the employee's. Still, it doesn't matter how many times a person is surveyed: you're still reducing a rich, full experience to an abstract, dry number. So, let's turn to the importance of that third change to the NPS question: the comment box.
Why is the comments box so important for NPS?
As a marketer and community manager, it's difficult to stress to software developers, DevOps teams, and executives how important qualitative data like social media comments is. So it's often ignored as some wasteful customer service thing.
In truth, this data can be hugely helpful, if it's used correctly. In previous research by GetApp, "How big data is used on today's IT teams", the #1 recommendation for using big data to make business decisions was to implement a qualitative data system. The NPS cleverly integrates qualitative data collection front and center without asking too much from respondents. Measuring it is another story, as we'll discuss in Part 3, but this is a solid step. By not asking people to elaborate on their responses in the initial question, NPS doesn't scare away those who otherwise wouldn't spend a moment taking a survey. It allows people to click the emoji they identify with at the end of the interaction and move on. As a result, you get more, and more natural, responses than just the people willing to take the survey. At the same time, however, the people who truly do have things to say have an unlimited amount of space in a defined box to say it. This is where companies can find the biggest, most important insights.
The value of qualitative data
While I was working as the Marketer-in-Residence at DigitalMarketer.com, a customer posted in their Facebook group about a concern that DigitalMarketer was over-emailing him. His post was longer than this article. Hundreds of people responded in the comments, and the community manager let the conversation roll. A flood of people who otherwise wouldn't have spoken, did.
This qualitative comment off the back of an email caused so much commotion that we spent 5 hours across the company discussing that Facebook post and the replies it garnered. DigitalMarketer's email team built an entirely new email marketing deliverable out of that feedback, and the company turned faulty email segmentation into one of its greatest successes. Imagine if that person had had access to a system similar to the NPS prior to his comment: DigitalMarketer could have gotten that response months beforehand.
Conclusion
The NPS is a powerful social listening tool that introduces you to the treasure trove of insights qualitative data can offer to you.
But in truth, the NPS is pretty low-hanging fruit among qualitative analysis tools. You can install it in a day if you have all your ducks in a row, and there are tons of insights left on the table for you to pick up as you get better. As the old adage goes: "First get good. Then get great." In Part 2 of this blog series, we'll show you how to install an NPS system yourself. Let us know if you need help getting started, hit hang-ups, or have any questions! I'll watch the comments section below, and if you have something more involved you'd like to discuss, you can email me at Samantha@SociallyConstructed.Online. Does NPS sound good?