\chapter{Related Work}
This chapter is divided into three parts. The first part explains what StackExchange is, how it has developed since its inception, and how it works. The second part reviews previous and related work. The third part covers approaches to analyzing sentiment as well as methods for analyzing trends over time.
\section{Background}
StackExchange\footnote{\url{https://stackexchange.com}} is a community question answering (CQA) platform where users can ask and answer questions, accept answers as an appropriate solution to a question, and up-/downvote questions and answers. StackExchange uses a community-driven knowledge creation process by allowing everyone who registers to participate in the community. Invested users also get access to moderation tools to help maintain the vast community. All posts on the StackExchange platform are publicly visible, allowing non-users to benefit from the community as well. Posts are also accessible to web search engines, so users can find questions and answers easily with a simple web search. StackExchange keeps an archive of all questions and answers posted, creating a knowledge archive for future visitors.
Originally, StackExchange started with StackOverflow\footnote{\url{https://stackoverflow.com}} in 2008\footnote{\label{atwood2008stack}\url{https://stackoverflow.blog/2008/08/01/stack-overflow-private-beta-begins/}}. Since then, StackExchange has grown into a platform hosting sites for 174 different topics\footnote{\label{stackexchangetour}\url{https://stackexchange.com/tour}}, for instance, programming (StackOverflow), mathematics (MathOverflow\footnote{\url{https://mathoverflow.net}} and Math StackExchange\footnote{\url{https://math.stackexchange.com}}), and typesetting (TeX/LaTeX\footnote{\url{https://tex.stackexchange.com}}). Questions on StackExchange are stated in natural English and consist of a title, a body containing a detailed description of the problem or information need, and tags to categorize the question. After a question is posted, the community can submit answers to it. The author of the question can then accept an appropriate answer that satisfies their question. The accepted answer is marked as such with a green checkmark and shown on top of all other answers. Figure \ref{soexamplepost} shows an example of a StackOverflow question. Questions and answers can be up-/downvoted by every user registered on the site. Votes typically reflect the quality and importance of the respective question or answer. Answers with a high voting score rise to the top of the answer list, as answers are sorted by vote score in descending order by default. Voting also influences a user's reputation \cite{movshovitz2013analysis}\footref{stackexchangetour}. When a post (question or answer) is voted upon, the reputation of the poster changes accordingly. Furthermore, downvoting an answer also decreases the reputation of the user who voted\footnote{\url{https://stackoverflow.com/help/privileges/vote-down}}.
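To make these voting mechanics concrete, the following minimal Python sketch models how a single vote changes a post's score and the poster's reputation, including the small cost of casting a downvote on an answer. The point values and the starting reputation are illustrative assumptions, not the platform's exact rules.
\begin{verbatim}
# Minimal model of StackExchange-style voting: a vote changes the post's
# score and the poster's reputation; downvoting an answer also costs the
# voter a small amount. All point values are illustrative assumptions.

class User:
    def __init__(self, name):
        self.name = name
        self.reputation = 1  # assumed small base value for new accounts

class Post:
    def __init__(self, kind, author):
        self.kind = kind      # "question" or "answer"
        self.author = author
        self.score = 0        # answers are sorted by this, descending

REP_DELTA = {"up": 10, "down": -2}   # assumed deltas per vote received
DOWNVOTE_COST = -1                   # assumed cost of downvoting an answer

def vote(post, voter, direction):
    post.score += 1 if direction == "up" else -1
    post.author.reputation += REP_DELTA[direction]
    if direction == "down" and post.kind == "answer":
        voter.reputation += DOWNVOTE_COST

# Example: an upvoted answer raises its author's reputation.
asker, answerer = User("asker"), User("answerer")
answer = Post("answer", answerer)
vote(answer, asker, "up")
assert answer.score == 1 and answerer.reputation == 11
\end{verbatim}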
Reputation on StackExchange indicates how trustworthy a user is. To reach a high reputation value, a user has to invest a lot of time and effort by asking good questions and posting good answers. Reputation also unlocks privileges, which may differ slightly from one community to another\footnote{\url{https://mathoverflow.com/help/privileges/}}\mfs\footnote{\url{https://stackoverflow.com/help/privileges/}}.
With privileges, users can, for instance, create new tags when the need for a new tag arises, cast votes on closing questions that are off-topic or duplicates of other questions, cast votes on reopening questions that were closed for no or a wrong reason, or even get access to moderation tools.
StackExchange also employs a badge system to steer the community\footnote{\label{stackoverflowbadges}\url{https://stackoverflow.com/help/badges/}}. Some badges can be obtained by performing one-time actions, for instance, reading the tour page, which contains necessary details for newly registered users; others are obtained by performing certain actions multiple times, for instance, editing and answering the same question within 12 hours.
Furthermore, users can comment on every question and answer. Comments can be used to further clarify an answer or to hold a short discussion about a question or answer.
For each community on StackExchange, a \emph{Meta} page is offered where members of the respective community can discuss the associated community \cite{mamykina2011design}\footnote{\url{https://stackoverflow.com/help/whats-meta/}}. This place is used by site admins to interact with the community. The \emph{Meta} pages are also used for proposing and voting on new features and for reporting bugs. \emph{Meta} pages run the same software as the normal CQA pages, so users vote on ideas and suggestions in the same way they would on the actual CQA sites.
\begin{figure}
\includegraphics[scale=0.47]{figures/stackoverflow_example_post}
\caption{A typical question on StackOverflow. In the top middle section of the page, the question is stated. The question has 4 tags and 3 comments attached to it. Beneath the question, all answers are listed by their score in descending order (only one answer is visible in this screenshot). The accepted answer is marked by a green checkmark. To the left of the question and answers, the score (computed via votes) is indicated.}
\label{soexamplepost}
\end{figure}
% explain SO and SE in detail and how it works (https://stackexchange.com/tour)
%- question answer platform with 174 sites for different topics, eg programming (biggest one), latex, ...
%- questions and answers in natural language
%- questions can have tags
%- questioners should post their question in the appropiate community, and formulation the question precisely, question should meet standards defined by the community
%- asker can accept 1 answer
%- question, answers up/downvoting, include voting and reputation changes from tour site, reputation == trustworthyness
%- badges and privilesges with higher reputation
%- suggestion can be made by others to improve the question, eg add tags or add/change content in the question for better finding, answering question
%- comments for questions and answers
%- each community has a meta page for discussion about community itself (not questions within the community)
%- each community uses the same software, although layout may differ from community to community but generally speaking same structure of the page
%- add pictures of typical stackexchange question page
%community driven knowlege creation process
%higher reputation also gives moderation tools (site management, flagging question offtopic, unspecific, ...) TODO add reference
%
% not only ``forum`` for fast q&a but also knowledge base
% public posts and therefore good search engine availibity eg. google
% so success: Design Lessons from the Fastest Q&A Site in the West \cite{mamykina2011design} understanding SO success
% change introduced mid august 2018
% write about that post
% include user question on how exactly it works
\section{State of the Art}
Since the introduction of Web 2.0 and the subsequent spawning of platforms for social interaction, researchers have been investigating emerging online communities. Research strongly focuses on the interactions of users on various platforms. Community knowledge platforms are of special interest, for instance, StackExchange/StackOverflow \cite{slag2015one, ford2018we, bazelli2013personality, movshovitz2013analysis, bosu2013building, yanovsky2019one, kusmierczyk2018causal, anderson2013steering, immorlica2015social, tausczik2011predicting}, Quora \cite{wang2013wisdom}, Reddit \cite{lin2017better, chandrasekharan2017you}, Yahoo! Answers \cite{bian2008finding, kayes2015social}, and Wikipedia \cite{yazdanian2019eliciting}.
These platforms allow communication over large distances and facilitate fast and easy knowledge exchange and acquisition by connecting thousands or even millions of users, creating valuable repositories of knowledge in the process. Users create, edit, and consume small pieces of information and collectively build a community and a knowledge repository. However, not every piece of information is factual \cite{wang2013wisdom, bian2008finding}, and platforms often employ some kind of moderation to maintain the value of the platform and to ensure a certain standard within the community.
%allow communitcation over large distances
%fast and easy knowledge exchange
%many answers to invaluable \cite{bian2008finding}
% DONE How Do Programmers Ask and Answer Questions on the Web? \cite{treude2011programmers} qa sites very effective at code review and conceptual questions
% DONE The role of knowledge in software development \cite{robillard1999role} people have different areas of knowledge and expertise
All these communities differ in their design. Wikipedia is a community-driven knowledge repository and consists of a collection of articles. Every user can create an article. Articles are edited collaboratively and continually improved and expanded. Reddit is a platform for social interaction where users create posts and comment on other posts or comments. Quora, StackExchange, and Yahoo! Answers are community question answering (CQA) platforms. On Quora and Yahoo! Answers, users can ask questions on any topic, whereas on StackExchange users have to post their questions in the appropriate subcommunity, for instance, StackOverflow for programming-related questions or MathOverflow for math-related questions.
CQA sites are very effective at code review \cite{treude2011programmers}. Code may be understood in the traditional sense of source code in programming-related fields, but this also translates to other fields, for instance, mathematics, where formulas represent code. CQA sites are also very effective at solving conceptual questions. This is because people have different areas of knowledge and expertise \cite{robillard1999role} and because established CQA sites have a large user base, which increases the variety of users with expertise in different fields.
\subsection{Running an online community}
Despite the differences in purpose and manifestation of these communities, they are social communities and have to follow certain laws. In their book ``Building successful online communities: Evidence-based social design'' \cite{kraut2012building}, \citeauthor{kraut2012building} lay out five equally important criteria online platforms have to fulfill in order to thrive:
1) When starting a community, it has to have a critical mass of users who create content. StackOverflow already had a critical mass of users from the beginning because the StackOverflow team were themselves experts in the domain \cite{mamykina2011design} and because of the private beta\footref{atwood2008stack}. Both aspects ensured a strong community core early on.
2) The platform must attract new users to grow as well as to replace leaving users. Depending on the type of community, new users should bring certain skills, for example, a programming background in open-source software development or extended knowledge of certain domains, or qualities, for example, a certain illness in medical communities. New users also bring the challenge of onboarding with them. Most newcomers will not be familiar with all the rules and nuances of the community \cite{yazdanian2019eliciting}\footnote{\label{hanlon2018stack}\url{https://stackoverflow.blog/2018/04/26/stack-overflow-isnt-very-welcoming-its-time-for-that-to-change/}}.
3) The platform should encourage users to commit to the community. Online communities are often based on the voluntary commitment of their users \cite{ipeirotis2014quizz}; hence, the platform has to ensure users are willing to stay. Most platforms do not have contracts with their users, so users should see benefits in staying with the community.
4) Contribution by users to the community should be encouraged. Content generation and engagement are the backbones of an online community.
5) The community needs regulation to sustain itself. Not every user in a community is interested in the well-being of the community. Therefore, every community has to deal with trolls and inappropriate or even destructive behavior. Rules need to be established and enforced to limit and mitigate the damage malicious users cause.
%new structure:
% list community knowledge platforms
% platforms need certian mechanisms and features to live and thrive: kraut etal
% - starting a community: critical mass, enought users to attract other users who also create content
% - attracting new users: attract new users to replace leaving ones, new users should be skilled and motivated to contribute (chanllange, depends on community some accept everyone others need specific skills (Eg OSS) or qualitities (eg illness for medical suppport groupgs, etc), mew users less commitment thatn old ones, newcommers may not behave according to community standard as they dont now them
% - encoraging commitment: willingness to stay in community (increases statisfaction, les likely to leave, better performance, more contribution), harder than in companies with employee contracts, contrast to OSS (no contract, voluntarity), greter competition from other communities in contrast to rl where options are limimted by location and distance
% - encouraging contribution: online communities need contributions by users (not lurking), content is foundation of community, contributions by users follows power law (usally, also confirmed in my results)
% - regualting behavior: maintain a funtioning community, prevent troll, inappropiate behavior, limit damage if it occurs, ease of entry & exit -> high turnover
All these criteria are heavily intertwined. Attracting new users often depends on the welcomingness and support of users who are already on the platform. Keeping users committed to the platform depends on their engagement with the community and how well the system design supports this. The following sections cover criteria 2) to 5).
\subsection{Onboarding}
The onboarding process of new users is a permanent challenge for online communities and differs from one platform to another. New users should be welcomed by the community and helped to integrate into it. This is a continuous process; it is not enough for a user to make one contribution and then revert to a non-contributing state. The StackExchange team has made efforts to onboard new users better by making several changes to the site. However, there are still problems where further action is required.
\textbf{One-day-flies}\\
\citeauthor{slag2015one} investigated why many users on StackOverflow post only once after their registration \cite{slag2015one}. They found that 47\% of all users on StackOverflow posted only once and called them one-day-flies. They suggest that the code example quality of one-day-flies is lower than that of more involved users, which often leads to answers and comments that first aim to improve the question and code instead of answering the stated question. This likely discourages new users from using the site further. Negative instead of constructive feedback is another cause for discontinued usage. The StackOverflow staff also conducted their own research on negative feedback in the community\footnote{\label{silge2019welcome}\url{https://stackoverflow.blog/2018/07/10/welcome-wagon-classifying-comments-on-stack-overflow/}}. They investigated the comment sections of questions by recruiting staff members to rate a set of comments and found that more than 7\% of the reviewed comments are unwelcoming.
One-day-flies are not unique to StackOverflow. \citeauthor{steinmacher2015social} investigated the social barriers newcomers face when they submit their first contribution to an open-source software project \cite{steinmacher2015social}. They based their work on empirical data and interviews and identified several social barriers preventing newcomers from placing their first contribution to a project. Furthermore, newcomers are often on their own in open-source projects. The lack of support and of peers to ask for help hinders them. \citeauthor{yazdanian2019eliciting} found that new contributors on Wikipedia face challenges when editing articles. Wikipedia hosts millions of articles\footnote{\url{https://en.wikipedia.org/wiki/Wikipedia:Size_of_Wikipedia}}, and new contributors often do not know which articles they could edit and improve. Recommender systems can solve this problem by suggesting articles to edit, but they suffer from the cold-start problem because they rely on past user activity, which is missing for new contributors. \citeauthor{yazdanian2019eliciting} proposed a solution by establishing a framework that automatically creates questionnaires to fill this gap. This also helps match new contributors with more experienced contributors who can help newcomers when they face a problem.
\citeauthor{allen2006organizational} showed that the one-time-contributor phenomenon also translates to workplaces and organizations \cite{allen2006organizational}. They found that socialization with other members of an organization plays an important role in turnover. The better the socialization within the organization, the less likely newcomers are to leave. This socialization process has to be actively pursued by the organization.
\textbf{Lurking}\\
One-day-flies may partially be a result of lurking. Lurking is consuming content generated by a community without contributing content to it. \citeauthor{nonnecke2006non} investigated lurking behavior on the Microsoft Network (MSN) \cite{nonnecke2006non} and found that, contrary to previous studies \cite{kollock1996managing, morris1996internet}, lurking is not necessarily bad behavior. Lurkers show passive behavior and are more introverted and less optimistic than actively posting members of a community. Previous studies suggested lurking is free riding, a taking-rather-than-giving process. However, the authors found that lurking is important for getting to know a community, how it works, and the nuances of social interaction on the platform. This allows for better integration when a person decides to join the community. StackExchange, and especially the StackOverflow community, probably has a large lurking audience. Many programmers do not register on the site, and those who do often ask only one question and revert to lurking, as suggested by \cite{slag2015one}.
% DONE Non-public and public online community participation: Needs, attitudes and behavior \cite{nonnecke2006non} about lurking, many programmers do that probably, not even registering, lurking not a bad behavior but observing, lurkers are more introverted, passive behavior, less optimistic and positive than posters, prviously lurking was thought of free riding, not contributing, taking not giving to comunity, important for getting to know a community, better integration when joining
\textbf{Reflection}\\
The StackOverflow team acknowledged the one-time-contributor trend\footref{hanlon2018stack}\footref{silge2019welcome} and made efforts to render the site more welcoming to new users\footnote{\label{friend2018rolling}\url{https://stackoverflow.blog/2018/06/21/rolling-out-the-welcome-wagon-june-update/}}. They laid out various reasons: Firstly, they had sent mixed messages about whether the site is an expert site or for everyone. Secondly, they gave too little guidance to new users, which resulted in poor questions from new users and in unwelcoming behavior of more integrated users towards them. New users do not know all the rules and nuances of communication in the communities. An example is that ``Please'' and ``Thank you'' are not well received on the site as they are deemed unnecessary. Also, the clarity and language quality of questions from new users are lower than those of more experienced users, which leads to unwelcoming or even toxic answers and comments. Moreover, users who gained moderation tool access could close questions with predefined reasons which often are not meaningful enough for the poster of the question\footnote{\label{hanlon2013war}\url{https://stackoverflow.blog/2013/06/25/the-war-of-the-closes/}}. Thirdly, marginalized groups, for instance, women and people of color \cite{ford2016paradise}\footref{hanlon2018stack}\mfs\footnote{\label{stackoversurvey2019}\url{https://insights.stackoverflow.com/survey/2019}}, are more likely to drop out of the community due to unwelcoming behavior from other users\footref{hanlon2018stack}. They feel the site is an elitist and hostile place.
The team suggested several steps to mitigate these problems. Some of these steps appeal to the users to be more welcoming and forgiving towards new users\footref{hanlon2018stack}\footref{silge2019welcome}\mfs\footnote{\url{https://stackoverflow.blog/2012/07/20/kicking-off-the-summer-of-love/}}, other steps are geared towards changes to the platform itself: The \emph{Be nice policy} (code of conduct) was updated with feedback from the community\footnote{\url{https://meta.stackexchange.com/questions/240839/the-new-new-be-nice-policy-code-of-conduct-updated-with-your-feedback}}. Among other things, the updated policy states that new users should not be judged for not knowing everything. Furthermore, the closing reasons were updated to be more meaningful to the poster, and closed questions are shown as ``on hold'' instead of ``closed'' for the first 5 days\footref{hanlon2013war}. Moreover, the team is investigating how the comment sections can be improved to reduce unwelcomeness and hostility and to maintain civility.
\textbf{Mentorship Research Project}\\
The StackOverflow team partnered with \citeauthor{ford2018we} and implemented the Mentorship Research Project \cite{ford2018we}\footnote{\url{https://meta.stackoverflow.com/questions/357198/mentorship-research-project-results-wrap-up}}. The project lasted one month and aimed to help newcomers improve their first questions before they are posted publicly. The program went as follows: When a user is about to post a question, the user is asked whether they want their question to be reviewed by a mentor. If they confirm, they are forwarded to a help room with a mentor, an experienced user. The question is then reviewed and the mentor suggests changes if applicable. These changes may include narrowing the question down for more precise answers, adding or adjusting a code example, or removing \emph{Please} and \emph{Thank you} from the question. After the review and editing, the question is posted publicly by the user. The authors found that mentored questions were received significantly better by the community than non-mentored questions. The questions also received higher scores and were less likely to be off-topic or poor in quality. Furthermore, newcomers are more comfortable when their question is reviewed by a mentor.
For this project, four mentors were hand-selected; the project would therefore not scale very well, as the number of mentors is very limited, but it gave the authors an idea of how to pursue their goal of increasing the welcomingness on StackExchange. The project was followed up by an \emph{Ask a Question Wizard} to help new users, as well as more experienced users, improve the structure, quality, and clarity of their questions\footref{friend2018rolling}.
% DONE One-day flies on StackOverflow \cite{slag2015one}, 1 contribution during whole registration, only user with 6 month of registration
% DONE Eliciting New Wikipedia Users Interests via Automatically Mined Questionnaires: For a Warm Welcome, Not a Cold Start \cite{yazdanian2019eliciting}, cold start recommender system problem for recommending newcommers arictles to read and get a feeling for how to write articles; similar to SO because new commers
% newcomers socialization, experienced users as models/mentors, positive feedback to newcomers
% DONE Do organizational socialization tactics influence newcomer embeddedness and turnover? \cite{allen2006organizational} #newcommers to organizations, actively embedding newcomers into organization, shows connection between socialaization and turnover (leaving the organization)
% DONE We Don't Do That Here: How Collaborative Editing with Mentors Improves Engagement in Social Q\&A Communities \cite{ford2018we} # mentoring new commers questions (before posting), 1 month experiment, collaborative experiment with stackoverflow team, novices got a choice upon submitting a question whether or not the want feedback from a mentor regaurding the question, if so redirect to help room where mentor reviews question and suggests changes to question, mentored questions significatly better than non-mentored ones, higher scores fewer offtopic or poor questions, novices more comfortable with mentor reviewed questions
% DONE Stack Overflow Isn't Very Welcoming: It's Time for That to Change \cite{hanlon2018stack} # fits very well into the story, effort to make site more welcoming, marginalized group feel SO is a hostile and elitist place, new coders, women, people of color, etc, admitting of problem that have not been addressed (enough), mixed messages (expert site or for everyone), to little guidance for new users, pecking on new users who dont know all little things on what (not) to do (no plz and thx, low quality question -> low qualtity answer -> comments about support for low quality) or bad english, previous attempts to improve welcoming, Summer of Love (https://stackoverflow.blog/2012/07/20/kicking-off-the-summer-of-love/), The War of the Closes (https://stackoverflow.blog/2013/06/25/the-war-of-the-closes/), The NEW new “Be Nice” Policy (“Code of Conduct”) — Updated with your feedback (https://meta.stackexchange.com/questions/240839/the-new-new-be-nice-policy-code-of-conduct-updated-with-your-feedback), Mentorship Research Project - Results + Wrap-Up (https://meta.stackoverflow.com/questions/357198/mentorship-research-project-results-wrap-up?noredirect=1&lq=1) also \cite{ford2018we}, removal condesting and sarcastic comments, ideas about beginner ask page (TODO already implemted?), dont judge users for not knowing things (e.g. posting duplicates)
% DONE Welcome Wagon: Classifying Comments on Stack Overflow \cite{silge2019welcome} #all about comments, effort to make site more welcoming, staff internal rating of comments (fine, unwelcoming, abusive, 57 raters, 13742 ratings, 3992 comments)
% DONE Social Barriers Faced by Newcomers Placing Their First Contribution in Open Source Software Projects\cite{steinmacher2015social} onboarding in open source software projects, difficulties for newcomers, newcommers often on their own, barriers when 1st contributing to a project,
% Rolling out the Welcome Wagon: June Update \cite{friend2018rolling} “Ask a Question Wizard” prototype, reduce exclusion (negative feelings, expectations and experiences), improve inclusion (learn from other communities facing similar problems), classification of abusive and unwelcoming comments
%Unwelcomeness is a large problem on StackExchange; not so strong; maybe other sentence
\textbf{Unwelcomeness}\\
Unwelcomeness is a large problem on StackExchange \cite{ford2016paradise}\footref{friend2018rolling}\footref{hanlon2018stack}. Although unwelcomeness affects all new users, users from marginalized groups suffer significantly more \cite{vasilescu2014gender}\footref{hanlon2018stack}. \citeauthor{ford2016paradise} investigated barriers users face when contributing to StackOverflow. The authors identified 14 barriers in total hindering newcomers from contributing, five of which were rated significantly more problematic by women than by men. On StackOverflow, only 5.8\% of active users identified as women in 2015\footnote{\url{https://insights.stackoverflow.com/survey/2015}} (7.9\% in 2019\footref{stackoversurvey2019}). \citeauthor{david2008community} found a similar share of 5\% women in their work on \emph{Community-based production of open-source software} \cite{david2008community}. These numbers are small compared to the share of degrees in Science, Technology, Engineering, and Mathematics (STEM) \cite{clark2005women} earned by women, which is around 20\% \cite{hill2010so}. Despite the difference, the percentage of women on StackOverflow has increased in recent years.
%discrimitation
% DONE Paradise Unplugged: Identifying Barriers for Female Participation on Stack Overflow \cite{ford2016paradise} gender gap, females only 5\%, contribution barriers, found 5 gender specific (women) barriers among 14 barrier in total, barriers also affect groups like industry programmers
% DONE Community-based production of open-source software: What do we know about the developers who participate? \cite{david2008community} only 5% women contribute to OSS
% DONE https://insights.stackoverflow.com/survey/2019: 7.9% women, increase since 2015: 5.8% \cite{stackoversurvey2019}
% Gender, Representation and Online Participation: A Quantitative Study \cite{vasilescu2014gender} investigation on minorities (eg women), under representation of minorities
% DONE Why So Few? Women in Science, Technology, Engineering, and Mathematics. \cite{hill2010so} women only 20 percent of bachelor degrees
% DONE Women and science careers: leaky pipeline or gender filter? \cite{clark2005women} underrepresentation in STEM
\subsection{Invoke commitment}
While attracting and onboarding new users is an important step for growing a community, keeping them on the platform and turning them into long-lasting community members is equally important for growth as well as sustainability. Users have to feel the benefits of staying with the community. Without benefits, a user has little to no motivation to interact with the community and will most likely drop out of it. Benefits are diverse, but they can be grouped into five categories: information exchange, social support, social interaction, time and location flexibility, and permanency \cite{iriberri2009life}.
As StackExchange is a CQA platform, the benefits of information exchange, time and location flexibility, and permanency are more prevalent, while social support and social interaction are more in the background. Social support and social interaction are more relevant in communities where individuals communicate about topics regarding themselves, for instance, communities where health aspects are the main focus \cite{maloney2005multilevel}. Time and location flexibility is important for all online communities. Information exchange and permanency are important for StackExchange as it is a large collection of knowledge that mostly does not change over time or from one individual to another. StackExchange's content is driven by the community and therefore depends on the voluntarism of its users, making benefits even more important.
%TODO abc this seem wrong here
The backbone of a community is always its user base and their willingness to participate voluntarily. Even if the community is led by a commercial core team, the community is almost always several orders of magnitude larger than the core team of paid employees \cite{butler2002community}. The core team often provides the infrastructure for the community and does some community work. However, most of the community work is done by volunteers from the community.
This is also true for the StackExchange platform, where the core team of paid employees numbers between 200 and 500\footnote{\url{https://www.linkedin.com/company/stack-overflow}} (this includes employees working on other products) and the number of voluntary community members performing community work (users with access to moderation tools) is around 10,000\footnote{\url{https://data.stackexchange.com/stackoverflow/revision/1412005/1735651/users-with-rep-20k}}.
\subsection{Encourage contribution}
In a community, users can generally be split into two groups by their motivation to voluntarily contribute: One group acts out of altruism; these users contribute to help others and do good for the community. The second group acts out of egoism and selfish reasons, for instance, getting recognition from other people \cite{ginsburg2004framework}. Users of the second group still help the community, but their primary goal is not necessarily the health of the community but gaining reputation and making a name for themselves. In contrast, users of the first group primarily focus on helping the community and see reputation as a positive side effect, which also feeds back into their ability to help others. While these groups have different objectives, both need recognition of their efforts \cite{iriberri2009life}. There are several methods for recognizing the value a member provides to the community: reputation, awards, trust, identity, etc. \cite{ginsburg2004framework}. Reputation, trust, and identity are often built gradually over time by continuously working on them; awards are reached at discrete points in time. Awards often take some time and effort to achieve. However, awards should not be easily achievable, as their value comes from the work that is required for them \cite{lawler2000rewarding}. They should also be meaningful in the community they are used in. Most importantly, awards have to be visible to the public so other members can see them. In this way, awards become a powerful motivator for users.
%TODO maybe look at finding of https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.92.3093&rep=rep1&type=pdf , in discussion bullet point list: subgroups, working and less feature > not working and more features, selfmoderation
%good content (quality, quantity)
%goodies
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%new
%https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.592.1587&rep=rep1&type=pdf \cite{iriberri2009life}
% -> about community life cycle, systainablity; READMORE cap 5&6&*8*&10.3&10.4&10.5
% -> look at success factors in table IX and X
% -> look at refs
% -> look at how to integrate that with kraut etal
% TODO look for parallels between papers and stackoverflow and write somethings about how stack overflow does it
% split into growth and sustainablity capters (maybe, depends on how well i can be split)
% IMPORTANT: recognize user contributions, with goodies \cite{iriberri2009life}
% community management (social managment)
% -> voluntarism
% -> reasons user would do that: altruistic(do good for the community), or selfish reasons (recognition from others (superiors), promotions, etc.) \cite{ginsburg2004framework}
% -> even if community is lead by paid employees, volunteers to most of the community work \cite{butler2002community}
% -> important factors: trust, reputation, identity \cite{ginsburg2004framework}
% other studies which suggest changes to improve community interaction/qualtity/sustainability
% -> help vampires, noobs, reputation collectors \cite{srba2016stack}
% -> qualtity solution suggestions \cite{srba2016stack}
% -> restrict openness of the community, not desirable (e.g. restrict number of questions to combat low-quality questions), will not be 100% efective\cite{srba2016stack}
% -> ''Improving Low Quality Stack Overflow Post Detection`` \cite{ponzanelli2014improving}, reduce review queue for moderators
% -> finding content abusers, yahoo answers \cite{kayes2015social}, other communities \cite{cheng2015antisocial}
% -> matching questions with answerers \cite{srba2016stack} (difficult questions -> expert users, easier questions -> answerers that know it but are not experts), dont overload experts, utilize capacities of the many nonexperts
% TODO look if moderation features are covered
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%intro .. se employes serveral features to engage/keep contributing users
%reputation
%badge system
%quality
StackExchange employs several features to engage users with the platform, for instance, the reputation system and the badge (award) system. These systems reward contributing users with achievements and encourage further contribution to the community. Both systems also try to maintain and increase the quality of the posts on the platform.
\textbf{Reputation}\\
Reputation plays an important role on StackExchange: it indicates the credibility of a user and marks them as a primary source of high-quality answers \cite{movshovitz2013analysis}. Although the largest share of all questions is posted by low-reputation users, high-reputation users post more questions on average. To earn a high reputation, a user has to invest a lot of effort and time into the community, for instance, by asking good questions or providing useful answers to the questions of others. Reputation is earned when a question or answer is upvoted by other users, or when an answer is accepted as the solution to a question by the question creator. \citeauthor{mamykina2011design} found that the reputation system of StackOverflow encourages users to compete productively \cite{mamykina2011design}. But not every user participates equally, and participation depends on the personality of the user \cite{bazelli2013personality}. \citeauthor{bazelli2013personality} showed that top-reputation users on StackOverflow are more extroverted than users with less reputation. \citeauthor{movshovitz2013analysis} found, by analyzing the StackOverflow community network, that experts can be reliably identified by their contributions within the first few months after registration. Graph analysis also allowed the authors to find spamming users and users with other extreme behavior.
Although gaining reputation takes time and effort, users can gain reputation faster by gaming the system \cite{bosu2013building, srba2016stack}. \citeauthor{bosu2013building} analyzed the reputation system and found five strategies to increase reputation quickly: Firstly, answering questions with tags that have a low expertise density. This reduces the competition from other users and increases the chance of upvotes and answer acceptance. Secondly, answering questions promptly: the question asker will most likely accept the first arriving answer that solves the question. This is also supported by \cite{anderson2012discovering}. Thirdly, being the first to answer gives the user an advantage over other answerers. Fourthly, activity during off-peak hours reduces the competition from other users. Finally, contributing to diverse areas also helps in developing a higher reputation. This behavior may, however, decrease answer quality when users focus too much on collecting reputation and disregard the quality of their posts \cite{srba2016stack}.
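To illustrate the first strategy, the sketch below computes a simple per-tag ``expertise density'' from a post dump: the number of distinct high-reputation answerers relative to the number of questions carrying the tag. \citeauthor{bosu2013building} do not prescribe this exact formula; both the metric and the reputation cutoff are assumed operationalizations for illustration.
\begin{verbatim}
# Hypothetical operationalization of per-tag "expertise density":
# distinct high-reputation answerers per question carrying the tag.
# Formula and threshold are assumptions, not from Bosu et al.
from collections import defaultdict

REP_THRESHOLD = 10_000  # assumed cutoff for "expert" users

def expertise_density(questions, answers, reputation):
    """questions: list of (question_id, tags); answers: list of
    (question_id, answerer_id); reputation: dict user_id -> points."""
    q_tags = dict(questions)
    questions_per_tag = defaultdict(int)
    experts_per_tag = defaultdict(set)
    for _qid, tags in questions:
        for tag in tags:
            questions_per_tag[tag] += 1
    for qid, uid in answers:
        if reputation.get(uid, 0) >= REP_THRESHOLD:
            for tag in q_tags.get(qid, ()):
                experts_per_tag[tag].add(uid)
    return {tag: len(experts_per_tag[tag]) / questions_per_tag[tag]
            for tag in questions_per_tag}

# expertise_density(questions=[(1, ["python"]), (2, ["python"]),
#                              (3, ["fortran"])],
#                   answers=[(1, 42), (3, 42)],
#                   reputation={42: 25_000})
# -> {"python": 0.5, "fortran": 1.0}
\end{verbatim}
Under this reading, tags with a low density are the ones where a reputation-seeking user faces the least competition from experts.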
% DONE Discovering Value from Community Activity on Focused Question Answering Sites: A Case Study of Stack Overflow \cite{anderson2012discovering} accepted answer strongly depends on when answers arrive, considered not only the question and accepted answer but the set of answers to a question
% reputation
% DONE On the personality traits of stackoverflow users \cite{bazelli2013personality} analyzing personality traits, top reputated users are more extroverted than less reputated users
% DONE Building reputation in stackoverflow: an empirical investigation. \cite{bosu2013building} gaming the reputation system of SO, answering question with tags with lower expertise density, answering promptly, first one to answer, activity during off peak hours, contributing to diverse areas
% DONE Analysis of the reputation system and user contributions on a question answering website: Stackoverflow \cite{movshovitz2013analysis} about the reputation system, high reputation indicates primary source of answers and high quality, most questions asked by low reputation users but high reputation users post most questions on avg compared to low reputation users, effective finding of spam users and other extreme behaviors via graph analysis, predicting which users become influential longterm contributors, experts can be reliably identified based on the participation in the first few months after registration
% DONE Design Lessons from the Fastest Q&A Site in the West \cite{mamykina2011design} understanding SO success, 1) productive competition (gamification reputation), 2) founders were already experts on site the created (ensured success early on, founders involved in community not external), 3) meta page for discussion and voting on features (same mechanics as on SO page)
\textbf{Badges}\\
Complementary to the reputation system, StackOverflow also employs a badge system\footref{stackoverflowbadges} to stimulate contributions by users \cite{cavusoglu2015can}. The goal of badges is to keep users engaged with the community \cite{li2012quantifying}. Therefore, badges are often used in a gamification setting where users contribute to the community and are rewarded when their behavior aligns with the requirements of the badges. Badges are visible in questions and answers as well as on the profile page of the user and can be earned by performing certain actions. Badges are often seen as a steering mechanism by researchers \cite{yanovsky2019one, kusmierczyk2018causal, anderson2013steering}. Users want to achieve badges and are therefore steered to perform certain actions; steering also occurs in the reputation system, but badges allow a wider variety of goals, for instance, asking and answering questions, voting on questions and answers, or writing higher-quality answers.
Badges also work as a motivator for users \cite{anderson2013steering}. Users often put non-trivial amounts of work and effort into achieving badges, and so badges become powerful incentives. However, not all users are equal, and they do not all pursue badges in the same way \cite{yanovsky2019one}. Contrary to \cite{anderson2013steering}, \citeauthor{yanovsky2019one} \cite{yanovsky2019one} found that users do not necessarily increase their activity prior to achieving a badge followed by an immediate decrease in contribution thereafter; instead, users behave differently based on their type of contribution. The authors found users can be categorized into three groups: Firstly, some users are not affected at all by the badge system and still contribute a lot to the community. Secondly, some users increase their activity before gaining a badge and keep their level of contribution afterward. Finally, some users increase their activity before achieving a badge and return to their previous level of engagement thereafter.
Different badges also create status classes \cite{immorlica2015social}. The harder a badge is to earn, the more unique it is within the community, and therefore the badge symbolizes some sort of status. Rare badges are often hard to achieve and take significant effort. For some users, depending on their type, this can be a huge motivator.
\citeauthor{kusmierczyk2018causal} found that first-time badges play an important role in steering users \cite{kusmierczyk2018causal}. The steering effect only takes place if the benefit to the user is greater than the effort the user has to put in to obtain the badge. If the effort is greater, the user will likely not pursue the badge and the steering effect will not occur.
% badge
% DONE One Size Does Not Fit All: Badge Behavior in Q\&A Sites \cite{yanovsky2019one} # all abount badges, steering users, motivation; previous paper say that contribution increases before badge obtaining and decrases afterwards, but they find it depends on type of user: 1) users are not affected by badge system but still contribute much, 2) contribution increase ans stays the same after badge achievement 3) return to previous levels
% DONE Can gamification motivate voluntary contributions? The case of StackOverflow Q&A community \cite{cavusoglu2015can} stimulting users to contribute via badges
% DONE SOCIAL STATUS AND BADGE DESIGN \cite{immorlica2015social} about badges and how they create status classes, badges for every user and individual badges
% DONE Quantifying the impact of badges on user engagement in online Q&A communities \cite{li2012quantifying} maintain consistent engagement, gamification via badges
% DONE On the Causal Effect of Badges \cite{kusmierczyk2018causal} # all abount badges, steering users, motivation, first-time badges, first time badges steer user behavior if benefit greater then effort, otherwise no effect
% Quizz: Targeted Crowdsourcing with a Billion (Potential) Users \cite{ipeirotis2014quizz} many online comunities bysed on volutarty of users not paid workers
% DONE Steering user behavior with badges \cite{anderson2013steering} # all abount badges, steering users, motivation, user may put in non trivial amounts of work to achieve badges -> powerful incentives, badges used in multiple ways (steer users to ask/answer more questions, voting, etc.)
\subsection{Regulation}
Regulation revolves around the user actions and the content a community creates. It is required to steer the community and keep it civil. Naturally, some users will not have the best intentions for the community in mind. Such actions must be anticipated, and harmful actions must be dealt with. Otherwise, the community and its content will deteriorate.
\textbf{Content quality}\\
Quality is a concern in online communities. Platform moderators and admins want to keep a certain level of quality or even raise it. However, higher-quality posts take more time and effort than lower-quality posts. In the case of CQA platforms, this is an even bigger problem, as higher-quality answers compete with fast responses. In addition, StackOverflow has a problem with low-quality, low-effort questions and the subsequent unwelcoming answers and comments\footref{silge2019welcome}.
\citeauthor{lin2017better} investigated how growth affects a community \cite{lin2017better}. They looked at Reddit communities that were added to the default set of subscribed communities of every new user (defaulting), which led to a huge influx of new users to these communities. The authors found that, contrary to expectations, the quality stays largely the same. The vote score dips shortly after defaulting but quickly recovers or even rises to higher levels than before. Complaints about low-quality content did not increase, and the language used in the community stayed the same. However, the community clustered around fewer posts than before defaulting. \citeauthor{srba2016stack} did a similar study on the StackOverflow community \cite{srba2016stack}. They found a similar pattern in the quality of posts: the quality of questions dipped momentarily due to the huge influx of new users but recovered after three months.
\citeauthor{tausczik2011predicting} found that reputation is linked to the perceived quality of posts in multiple ways \cite{tausczik2011predicting}. They suggest reputation could be used as an indicator of quality. Quality also depends on the type of platform. \citeauthor{harper2008predictors} showed that expert sites that charge fees, for instance, library reference services, have higher-quality answers than free sites \cite{harper2008predictors}. Also, the higher the fee, the higher the quality of the answers. However, free community sites outperform expert sites in terms of answer density and responsiveness.
\textbf{Content abuse}\\
\citeauthor{srba2016stack} identified three types of users causing the lowering of quality \cite{srba2016stack}: \emph{Help Vampires} (who spend little to no effort researching their questions, which leads to many duplicates), \emph{Noobs} (who create mostly trivial questions), and \emph{Reputation Collectors}. The latter try to gain reputation as fast as possible using the methods described by \citeauthor{bosu2013building} \cite{bosu2013building}, but often with no regard for the effects their behavior has on the community, for instance, lowering overall content quality, turning other users away from the platform, and encouraging the behavior of \emph{Help Vampires} and \emph{Noobs} even more.
Questions from \emph{Help Vampires} and \emph{Noobs} direct answerers away from much more demanding questions. On the one hand, this leads to knowledgeable answerers answering questions for which they are overqualified, and on the other hand to a lack of adequate-quality answers for more difficult questions. \citeauthor{srba2016stack} suggest a system that tries to match questions with answerers who satisfy the knowledge requirement but are not grossly overqualified. Such a system would avoid suggesting simple questions to overqualified answerers and prevent an answer vacuum for questions on more advanced topics. This would ensure a more optimal utilization of the answering capacity of the community.
\textbf{Content moderation}\\
\citeauthor{srba2016stack} proposed some solutions to the quality problems. One suggestion is to restrict the openness of a community. This can be accomplished in different ways, for instance, by introducing a daily posting limit for questions \cite{srba2016stack}. While this certainly limits the amount of low-quality posts, it does not eliminate the problem. Furthermore, this limitation would also hurt engaged users who create a large volume of higher-quality content. A much more intricate solution that adapts to user behavior would be required; otherwise, the limitation would hurt the community more than it helps.
\citeauthor{ponzanelli2014improving} performed a study on post quality on StackOverflow \cite{ponzanelli2014improving}. They aimed to improve the automatic low-quality post detection system that is already in place and to reduce the size of the review queue selected individuals have to go through. Their classifier improves on the existing one by including popularity metrics of the posting user and the readability of the post itself. With these additional factors, they managed to reduce the number of misclassified quality posts with only a minimal decrease in correctly classified low-quality posts. Their improvement to the classifier reduced the review queue size by 9\%.
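A minimal sketch of this idea follows: a linear classifier over a readability score and popularity metrics of the posting user. The concrete features (the Flesch reading-ease measure, reputation, badge count) and the model choice are illustrative assumptions, not the exact pipeline of \citeauthor{ponzanelli2014improving}.
\begin{verbatim}
# Sketch of a low-quality post classifier combining readability with
# poster popularity. Features and model are illustrative only, not the
# exact pipeline of Ponzanelli et al.
import re
from sklearn.linear_model import LogisticRegression

def flesch_reading_ease(text):
    # Standard Flesch formula with a crude regex-based syllable count.
    words = re.findall(r"[A-Za-z']+", text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    n_words = max(1, len(words))
    return (206.835 - 1.015 * (n_words / sentences)
            - 84.6 * (syllables / n_words))

def features(body, author_reputation, author_badge_count):
    # Readability of the post plus popularity metrics of its author.
    return [flesch_reading_ease(body), author_reputation,
            author_badge_count]

# Training on labeled historical posts (X: feature rows, y: 1 = low
# quality): clf = LogisticRegression().fit(X, y). Posts predicted as
# low quality would then be routed into the moderation review queue.
\end{verbatim}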
Another solution is to find content abusers (noobs, help vampires, etc.) directly. One approach is to add a reporting system to the community; however, a system of this kind is also driven by user input and can therefore be manipulated as well. This would lead to excluding users flagged as false positives while missing a portion of content abusers completely. A better approach is to systematically find these users by their behavior. \citeauthor{kayes2015social} describe a classifier that achieves an accuracy of 83\% on the \emph{Yahoo! Answers} platform \cite{kayes2015social}. The classifier is based on empirical data: the authors looked at historical user activity, report data, and which users were banned from the platform. From these statistics, they created a classifier that is able to distinguish between falsely and fairly banned users. \citeauthor{cheng2015antisocial} performed a similar study on antisocial behavior on various platforms \cite{cheng2015antisocial}. They too looked at the historical data of users and their eventual bans as well as their deleted-post rates. Their classifier achieved an accuracy of 80\%.
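The sketch below illustrates the kind of behavioral features such classifiers can be built from; the concrete feature set is an assumption for illustration, and the cited papers each use their own, richer feature sets.
\begin{verbatim}
# Illustrative behavioral features for detecting content abusers from a
# user's history, in the spirit of Kayes et al. and Cheng et al. The
# exact feature set is an assumption, not taken from either paper.

def user_history_features(posts, reports_received, reports_made):
    """posts: list of dicts with boolean 'deleted' and integer 'score'."""
    n = max(1, len(posts))
    return {
        "deleted_post_rate": sum(p["deleted"] for p in posts) / n,
        "mean_post_score": sum(p["score"] for p in posts) / n,
        "reports_received": reports_received,
        "reports_made": reports_made,  # may expose report manipulation
    }

# Such feature vectors, paired with historical ban decisions as labels,
# are what a behavior-based classifier would be trained on.
\end{verbatim}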
% quality
% DONE Predicting the perceived quality of online mathematics contributions from users' reputations \cite{tausczik2011predicting} about mathoverflow and quality
% DONE Predictors of Answer Quality in Online Q&A Sites cite{harper2008predictors} 1) shows that fee or expert sites are better than open qa sites (greater fee better answers), 2) big communty sites like Yahoo! Answers outperform sites which depend on experts (e.g. library refernce services) (higher answer diversity and responsiveness)
% DONE Better When It Was Smaller? Community Content and Behavior After Massive Growth \cite{lin2017better}, defaulting of subreddit, quality remains high, dip in upvotes directly after defaulting but recover quickly and get even higher than before, complaints about low-quality content do not increase, language stays the same, however community clusters among fewer posts than before defaulting
% lowering content quality (Gorbatai 2011) %TODO read and add to list of notes
% other
% Finding the Right Facts in the Crowd: Factoid Question Answering over Social Media \cite{bian2008finding}, about Yahoo! Answers, finding factual answers by using available data on user interaction
% No Country for Old Members: User Lifecycle and Linguistic Change in Online Communities \cite{danescu2013no}
% A comprehensive survey and classification of approaches for community question answering \cite{srba2016comprehensive}, meta study on papers published between 2005 and 2014
\section{Analysis}
When analyzing a community, one typically finds two types of data: text and metadata. Metadata is relatively easy to quantify, while text is far more complicated and intricate to quantify. Text contains a large variety of features, and depending on the research question, researchers have to decide which features they want to include. This thesis investigates the (un-)friendliness in the communication between users and therefore performs sentiment analysis on the texts. The next section goes into more detail on sentiment analysis. After the data (text and metadata) is quantified, one often wants to know how it has changed over time; the trend analysis section follows the sentiment analysis section.
%
%assign values to text
%analyze trend
% sentiment analysis: there are 10-15 methods,
% all sentiment methods + vader
\subsection{Sentiment analysis}
Researchers have put forth many tools for sentiment analysis over the years. Each tool has its advantages and drawbacks, and there is no silver-bullet solution that fits all research questions. Researchers have to choose a tool that best fits their needs, and they need to be aware of the drawbacks of their choice. Sentiment analysis poses three important challenges:
\begin{itemize}
\item Coverage: detecting as many features as possible from a given piece of text
\item Weighting: assigning one or multiple values (value range and granularity) to detected features
\item Creation: creating and maintaining a sentiment analysis tool is a time- and labor-intensive process
\end{itemize}
% many different methods
%
% have to choose tool depending on task
% beware of the drawbacks
%challenges (vader)
% - coverage (e.g. of lexical features, important in mircoblog texts)
% - sentiment intensity (some of the following tools ignore intensity completly (just -1, or 1)
% - creating a human-validated gold standard lexicon is very time consuming/labor intensive, with sentiment valence scores, feature detection and context awareness,
In general, sentiment analysis tools can be grouped into two categories: handcrafted and automated (machine learning).
%distinction into 2 groups: handcrafted and automated tools
% polarity-based -> binary
% valence-base -> continuous
%%%%% handcrafted - TODO order by sofistication, sentiwordnet last
%lexicon generation very time consuming
%generally fast sentiment computation
%realtively easy to update (added words, ...)
%comprehensible results
\textbf{Handcrafted Approaches}\\
Creating hand-crafted tools is often a huge undertaking. They depend on a hand-crafted lexicon (a human-curated gold standard) that maps features of a text to values. In the simplest case, such a lexicon just maps a word to a binary value: -1 (negative word) or 1 (positive word). However, most tools use a more complex lexicon to capture more features of a piece of text. By design, hand-crafted tools allow a fast computation of the sentiment of a given piece of text. Also, hand-crafted lexicons are easy to update and extend. Furthermore, hand-crafted tools produce easily comprehensible results. The following paragraphs explain some of the analysis tools in this category.
%liwc (Linguistic Inquiry and Word Count) \cite{pennebaker2001linguistic,pennebakerdevelopment}, 2001 %TODO refs wrong?
% - well verified
% - ignores acronyms, initialisms, emoticons, or slang, which are known to be important for sentiment analysis of social text (vader)
% - cannot recognise sentiment intensity (all word have an equal weight) (vader)
% - ca 4500 words (uptodate?), ca 400 pos words, ca 500 neg words, lexicon proprietary (vader)
% - TODO list some application examples
% ...
Linguistic Inquiry and Word Count (LIWC) \cite{pennebaker2001linguistic,pennebakerdevelopment} is one of the more popular tools. Due to its widespread usage, LIWC is well verified, both internally and externally. Its lexicon consists of about 6,400 words, each categorized into one or more of 76 defined categories \cite{pennebaker2015development}. 620 words carry a positive and 744 words a negative emotion. Examples of positive words are: love, nice, sweet; examples of negative words are: hurt, ugly, nasty. LIWC also has some drawbacks; for instance, it does not capture acronyms, emoticons, or slang words. Furthermore, LIWC's lexicon uses a polarity-based approach, meaning that it cannot distinguish between the sentences ``This pizza is good'' and ``This pizza is excellent'' \cite{hutto2014vader}. \emph{Good} and \emph{excellent} are both in the category of positive emotion, but LIWC does not distinguish between single words in the same category.
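To illustrate why a purely polarity-based lexicon cannot capture intensity, consider the following minimal sketch, which scores text against a tiny made-up word list (the words are taken from the examples above, not from LIWC's actual lexicon):
\begin{lstlisting}[language=Python]
# Minimal polarity-based scoring: every positive word counts +1,
# every negative word -1, so all words in a category weigh the same.
POSITIVE = {"love", "nice", "sweet", "good", "excellent"}
NEGATIVE = {"hurt", "ugly", "nasty"}

def polarity_score(text):
    tokens = text.lower().split()
    return sum((t in POSITIVE) - (t in NEGATIVE) for t in tokens)

print(polarity_score("this pizza is good"))       # 1
print(polarity_score("this pizza is excellent"))  # 1 -- indistinguishable
\end{lstlisting}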
%General Inquirer (GI) \cite{stone1966general} 1966 TODO ref wrong?
% - 11k words, 1900 pos, 2300 neg, all approx (vader)
% - very old (1966), continuously refined, still in use (vader)
% - misses lexical feature detection (acronyms, ...) and sentiment intensity (vader)
General Inquirer (GI) \cite{stone1966general} is one of the oldest sentiment tools still in use. It was originally designed in 1966 and has been continuously refined; it now consists of about 11,000 words, of which roughly 1,900 are rated positive and 2,300 negative. Like LIWC, GI uses a polarity-based lexicon and is therefore not able to capture sentiment intensity \cite{hutto2014vader}. Also, GI does not recognize lexical features, such as acronyms, initialisms, etc.
%Hu-Liu04 \cite{hu2004mining,liu2005opinion}, 2004
% - focuses on opinion mining, find features in multiple texts (eg reviews) and rate the opinion about the feature, pos/neg binary classification (hu2004mining)
% - does not text summarize opinions but summarizes ratings (hu2004mining)
% - 6800 words, 2000 pos, 4800 neg, all approx values (vader)
% - better suited for social media text, misses emoticons and acronyms/initialisms (vader)
% - bootstrapped from wordnet (wellknown english lexical database) (vader, hu2004mining)
%TODO refs
Hu-Liu04 \cite{hu2004mining,liu2005opinion} is an opinion mining tool. It searches for features in multiple pieces of text, for instance, product reviews, and rates the opinion about each feature using a binary classification \cite{hu2004mining}. Crucially, Hu-Liu04 does not summarize the texts themselves but the ratings of the opinions about features mentioned in them. Hu-Liu04 was bootstrapped from WordNet \cite{hu2004mining} and then extended further. It now uses a lexicon consisting of about 6,800 words, of which about 2,000 carry a positive and 4,800 a negative sentiment \cite{hutto2014vader}. This tool is, by design, better suited for social media texts, although it too misses emoticons, acronyms, and initialisms.
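The Hu-Liu lexicon is freely available, for example through NLTK's \texttt{opinion\_lexicon} corpus. A small sketch of how it can be inspected (assuming NLTK is installed and the corpus has been downloaded):
\begin{lstlisting}[language=Python]
# Inspecting the Hu-Liu opinion lexicon as distributed with NLTK.
import nltk
nltk.download("opinion_lexicon")  # one-time download
from nltk.corpus import opinion_lexicon

positive = set(opinion_lexicon.positive())
negative = set(opinion_lexicon.negative())
print(len(positive), len(negative))  # about 2,000 and 4,800 entries
print("great" in positive, "nasty" in negative)
\end{lstlisting}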
%SenticNet \cite{cambria2010senticnet} 2010
% - concept-level opinion and sentiment analysis tool (vader)
% - sentic mining: combination of AI and Semantic Web (vader, senticnet)
% - graphmining and dimensionality reduction (vader, senticnet)
% - uses conceptnet: directed graph of concepts and relations (TODO refernce
% - lexicon: 14250 common-sense concepts, with polarity scores [-1,1] continuous, and many other values (vader)
% - TODO list some concepts (vader) or maybe not
SenticNet \cite{cambria2010senticnet} is also an opinion mining tool, but it focuses on concept-level opinions. SenticNet is based on a paradigm called \emph{Sentic Mining}, which combines concepts from artificial intelligence and the Semantic Web; more specifically, it uses graph mining and dimensionality reduction. SenticNet's lexicon consists of about 14,250 common-sense concepts, which are rated on several scales, one of which is a polarity score with a continuous range from -1 to 1 \cite{hutto2014vader}. This continuous range of polarity scores makes SenticNet sentiment-intensity aware.
%ANEW (Affective Norms for English Words) \cite{bradley1999affective} 1999
% - tool introducted to compare and standardize research
% - lexicon: 1034 words, ranked by pleasure, arousal, and dominance (vader, bradley1999affective)
% - words get value 1-9 (neg-pos, continuous), 5 neutral (TODO maybe list word examples with associated value) (vader, bradley1999affective)
% - therefore captures sentiement intensity (vader, bradley1999affective)
% - misses lexical features (e.g. acronyms, ...) (vader)
Affective Norms for English Words (ANEW) \cite{bradley1999affective} is a sentiment analysis tool that was introduced to standardize research and offer a way to compare studies. Its lexicon is fairly small and consists of only 1,034 words, which are ranked by pleasure, arousal, and dominance. However, ANEW uses a continuous scale from 1 to 9, where 1 represents the negative end, 9 the positive end, and 5 is considered neutral. With this design, ANEW is able to capture sentiment intensity. However, ANEW still misses lexical features, for instance, acronyms \cite{hutto2014vader}.
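In contrast to the polarity sketch above, a valence-based lexicon preserves intensity. The following sketch uses made-up valence values on ANEW's 1--9 scale; the numbers are purely illustrative and are not ANEW's published norms:
\begin{lstlisting}[language=Python]
# Illustrative valence-based scoring on a 1-9 scale (5 = neutral);
# the values below are invented for this example.
VALENCE = {"good": 7.5, "excellent": 8.4, "nasty": 2.2}

def mean_valence(text, neutral=5.0):
    tokens = [t for t in text.lower().split() if t in VALENCE]
    if not tokens:
        return neutral
    return sum(VALENCE[t] for t in tokens) / len(tokens)

print(mean_valence("this pizza is good"))       # 7.5
print(mean_valence("this pizza is excellent"))  # 8.4 -- intensity preserved
\end{lstlisting}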
%wordnet \cite{miller1998wordnet} 1998, TODO maybe exlcude or just mention briefly in sentiwordnet
% - well-known English lexical database (vader)
% - group synonyms (synsets) together (vader)
% -
WordNet \cite{miller1995wordnet,miller1998wordnet} analyzes text with a dictionary that contains lexical concepts. Each lexical concept groups multiple synonymous words into a so-called synset. These synsets are then linked to each other by semantic relations. With this lexicon, text can be queried in multiple different ways.
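As a brief illustration, WordNet's synsets and semantic relations can be queried through NLTK (assuming the \texttt{wordnet} corpus has been downloaded):
\begin{lstlisting}[language=Python]
# Querying WordNet synsets and semantic relations via NLTK.
from nltk.corpus import wordnet as wn

for syn in wn.synsets("party")[:3]:
    print(syn.name(), "-", syn.definition())

# hypernyms link a synset to its more general concept
print(wn.synset("dog.n.01").hypernyms())
\end{lstlisting}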
%sentiwordnet \cite{baccianella2010sentiwordnet}
% - extension of wordnet (vader, baccianella2010sentiwordnet)
% - 147k synsets (vader),
% - with 3 values for pos neu neg, sum of synset (pos neu neg) = 1, range 0-1 continuous (vader,baccianella2010sentiwordnet)
% - synset values calc by complex mix of semi supervised algorithms (properagtion methods and classifiers) -> not a gold standard lexicon (vader, baccianella2010sentiwordnet)
% - lexicon very noisy, most synset not pos or neg but mix (vader)
% - misses lexical features (vader)
SentiWordNet \cite{baccianella2010sentiwordnet} is an extension of WordNet that adds sentiment scores to the synsets. Its lexicon consists of about 147,000 synsets, each having three values (positive, neutral, negative) attached. Each value has a continuous range from 0 to 1, and the sum of the three values of a synset is 1. The values are calculated by a mix of semi-supervised algorithms, mostly propagation methods and classifiers. This distinguishes SentiWordNet from the previously explained sentiment tools, whose lexica are created exclusively by humans (apart from simple mathematical operations, for instance, averaging of values). Therefore, SentiWordNet's lexicon is not considered a human-curated gold standard. Furthermore, the lexicon is very noisy, and most synsets are neither positive nor negative but a mix of both \cite{hutto2014vader}. Moreover, SentiWordNet misses lexical features, for instance, acronyms, initialisms, and emoticons.
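The three scores of a synset can be read through NLTK's \texttt{sentiwordnet} corpus reader (assuming the \texttt{wordnet} and \texttt{sentiwordnet} corpora have been downloaded):
\begin{lstlisting}[language=Python]
# Reading SentiWordNet's positive/negative/objective scores for a synset.
from nltk.corpus import sentiwordnet as swn

s = swn.senti_synset("good.a.01")
# the three scores lie in [0, 1] and sum to 1
print(s.pos_score(), s.neg_score(), s.obj_score())
\end{lstlisting}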
%Word-Sense Disambiguation (WSD) \cite{akkaya2009subjectivity}, 2009
% - TODO
% - not a sentiment analysis tool per se but can be combined with sentiement analysis tool to distinuish multiple meaning for a word (vader, akkaya2009subjectivity)
% - a word can have multiple meanings, pos neu neg depending on context (vader,akkaya2009subjectivity)
% - derive meaning from context -> disambiguation (vader, akkaya2009subjectivity)
% - distinguish subjective and objective word usage, sentences can only contain negative words used in object ways -> sentence not negative, TODO example sentence (akkaya2009subjectivity)
Word-Sense Disambiguation (WSD) \cite{akkaya2009subjectivity} is not a sentiment analysis tool per se; however, it can be used to enhance others. In natural language, certain words have different meanings depending on the context in which they are used. When sentiment tools that do not use WSD analyze a piece of text, words whose meaning depends on context may skew the resulting sentiment; some words can even change from positive to negative or vice versa depending on the context. WSD tries to distinguish between subjective and objective word usage. Consider, for example, \emph{The party was great.} and \emph{The party lost many votes.} Although \emph{party} is written exactly the same, it carries two completely different meanings, and depending on the context, ambiguous words can have different sentiments.
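A classic WSD method is the Lesk algorithm, available in NLTK. The sketch below attempts to disambiguate \emph{party} in the two example sentences; note that simplified Lesk is a heuristic and does not always pick the intended sense:
\begin{lstlisting}[language=Python]
# Disambiguating a word with NLTK's (simplified) Lesk algorithm.
from nltk.wsd import lesk

sent1 = "the party was great fun with music and dancing".split()
sent2 = "the party lost many votes in the election".split()

print(lesk(sent1, "party", "n"))  # ideally the celebration sense
print(lesk(sent2, "party", "n"))  # ideally the political-organization sense
\end{lstlisting}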
%%%%% automated (machine learning)
%often require large training sets, compare to creating a lexicon (vader)
%training data must represent as many features as possible, otherwise feature is not learned, often not the case (vader)
%training data should be unbiased, or else wrong learning (NOT VADER)
%very cpu and memory intensive, slow, compare to lexicon-based (vader)
%derived features not comprehensible to a human (black-box) (vader)
%generalization problem (vader)
%updateing (extend/modify) hard (e.g. new domain) (vader)
\textbf{Machine Learning Approaches}\\
Because handcrafting sentiment analysis tools requires a lot of effort, researchers turned to approaches that offload the labor-intensive part to machine learning (ML). However, this creates a new challenge, namely gathering a \emph{good} data set to feed the machine learning algorithms for training. Firstly, a \emph{good} data set needs to represent as many features as possible; otherwise, the algorithm will not learn to recognize them. Secondly, the data set has to be unbiased and representative of the data from which it is drawn. The data set has to represent each feature in an appropriate amount; otherwise, the algorithm may discriminate against a feature in favor of other, better-represented features. These requirements are hard to fulfill, and often they are not \cite{hutto2014vader}. After a data set is acquired, a model has to be learned by the ML algorithm, which is, depending on the complexity of the algorithm, a very computation- and memory-intensive process. After training is completed, the algorithm can predict sentiment values for new pieces of text it has never seen before. However, due to the nature of this approach, the results cannot easily, if at all, be comprehended by humans. ML approaches also suffer from a generalization problem and therefore cannot be transferred to other domains without either accepting bad performance or updating the training data set to fit the new domain. Updating (extending or modifying) the model also requires complete retraining from scratch. These drawbacks make ML algorithms useful only in narrow situations where changes are not required and the training data is static and unbiased.
% naive bayes
% - simple (vader)
% - assumption: feature probabilties are indepenend of each other (vader)
The Naive Bayes (NB) classifier is one of the simplest ML algorithms. It uses Bayesian probability to classify samples. This requires the assumption that the probabilities of the features are independent of one another, which they often are not, because natural language exhibits strong structural dependencies between features.
% Maximum Entropy
% - exponential model + logistic regression (vader)
% - feature weighting through not assuming indepenence as in naive bayes (vader)
Maximum Entropy (ME) is a more sophisticated algorithm. It uses an exponential model and logistic regression. It distinguishes itself from NB by not assuming conditional independence of features. It also supports weighting features by their entropy.
%svm
%- mathematically demanding (vader)
%- seperate datapoints using hyper planes (vader)
%- long training period (other methods do not need training at all because lexica) (vader)
Support Vector Machines (SVMs) use a different approach. SVMs place data points in an $n$-dimensional space and separate them with hyperplanes ($(n-1)$-dimensional planes), so data points fall into one of the two half-spaces created by the hyperplane. This approach is usually very memory- and computation-intensive, as each data point is represented by an $n$-dimensional vector, where $n$ denotes the number of trained features.
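As a rough illustration of these three approaches (not a tuned or representative setup), the following sketch trains each classifier on a tiny hand-labeled toy corpus using scikit-learn, with \texttt{LogisticRegression} standing in as the maximum entropy model:
\begin{lstlisting}[language=Python]
# Toy comparison of Naive Bayes, Maximum Entropy (logistic regression),
# and a linear SVM on bag-of-words features; real applications need far
# larger, unbiased training sets.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

texts = ["great answer, thanks", "this is wrong and useless",
         "works perfectly", "terrible, do not use"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

for clf in (MultinomialNB(), LogisticRegression(), LinearSVC()):
    model = make_pipeline(CountVectorizer(), clf)
    model.fit(texts, labels)
    print(type(clf).__name__, model.predict(["thanks, this works"]))
\end{lstlisting}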
%generall blyabla, transition to vader
In general, ML approaches do not provide an improvement over hand-crafted lexicon approaches, as they only shift the time-intensive process to the collection of training data sets. Furthermore, lexicon-based approaches seem to have progressed further in terms of coverage and feature weighting. However, many tools are not specifically tailored to social media text analysis and lack coverage in feature detection.
%vader (Valence Aware Dictionary for sEntiment Reasoning)(grob) \cite{hutto2014vader}
% - 2014
% - detects acyrnoms, ...
% - sentiment intensity
% - not just 1 and -1 for pos and neg but value in a range
% - context awareness
% - disabliguation of words if they have multiple meanings (contextual meaning)
\textbf{VADER}\\
This shortcoming was addressed by \citeauthor{hutto2014vader}, who introduced a new sentiment analysis tool: Valence Aware Dictionary for sEntiment Reasoning (VADER) \cite{hutto2014vader}. \citeauthor{hutto2014vader} acknowledged the problems that many tools have and designed VADER to overcome these shortcomings. Their aim was to introduce a tool that works well in the social media domain, provides good coverage of the features occurring there (acronyms, initialisms, slang, etc.), and is able to work with online streams of text (live processing). VADER is also able to distinguish between different meanings of words (WSD), and it takes sentiment intensity into account. These properties make VADER an excellent choice for analyzing sentiment in the social media domain.
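Using VADER's reference implementation (the \texttt{vaderSentiment} Python package) is straightforward; the example sentences here are illustrative:
\begin{lstlisting}[language=Python]
# Scoring text with VADER; each result contains neg/neu/pos proportions
# and a normalized compound score in [-1, 1].
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
print(analyzer.polarity_scores("The pizza is good."))
print(analyzer.polarity_scores("The pizza is EXCELLENT!! :)"))
\end{lstlisting}
Note how the capitalization, the exclamation marks, and the emoticon raise the intensity of the second sentence, which is exactly the kind of social media signal VADER was designed to capture.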
%The authors used a lexicon-based approach as performance was one of the most important requirements.
%general
%dep on sentiment lexicons, more info in vader 2.1 Sentiment Lexicons
%vader not binary (pos, neg) but 3 categories
% its
% original ITS paper; how was this done previously (before)?
\subsection{Trend analysis}
When introducing a change to a system (an experiment), one often wants to know whether the intervention achieves its intended purpose. This leads to three possible outcomes: a) the intervention shows an effect and the system changes in the desired way, b) the intervention shows an effect and the system changes in an undesired way, or c) the system did not react to the change at all. There are multiple ways to determine which of these outcomes occurred. To analyze the behavior of the system, data from before and after the intervention as well as the nature of the intervention has to be acquired. There are multiple ways to run such an experiment, and one has to choose which type fits best. There are two categories of approaches: actively creating an experiment, where one designs the experiment before it is executed (for example, randomized controlled trials in medicine), or using existing data from an experiment that was not designed beforehand, or where setting up a designed experiment is not possible (a quasi-experiment).
As this thesis investigates a change that has already been implemented by another party, it relies on a quasi-experiment. A tool that is often used for this purpose is an \emph{Interrupted Time Series} (ITS) analysis. ITS analysis is a form of segmented regression analysis, where data from before, after, and during the intervention is regressed with separate line segments \cite{mcdowall2019interrupted}. ITS requires data at (regular) intervals from before and after the intervention (a time series). The interrupt signifies the intervention, and the time at which it occurred must be known. The intervention can take place at a single point in time or be stretched out over a certain time span; this property must also be known so it can be taken into account when designing the regression. Also, as the data is acquired from a quasi-experiment, it may be biased \cite{bernal2017interrupted}, for example, by seasonality, time-varying confounders (for example, a change in how the data is measured), or variance in the number of single observations grouped together in an interval measurement. These biases need to be addressed if present. Seasonality can be accounted for by subtracting the average value of each month across successive years (i.e., subtracting the average of all January values in the data set from each individual January value).
%\begin{lstlisting}
% deseasonalized = datasample - average(dataSamplesInMonth(month(datasample)))
%\end{lstlisting}
This removes the systematic differences between the months, thereby filtering out the effect of seasonality. The variance in data density per interval (the number of data samples in an interval) can be addressed by using each single data point in the regression instead of an interval average.
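A minimal sketch of a segmented ITS regression with \texttt{statsmodels} on synthetic data (the series, the intervention time \texttt{t0}, and the effect sizes are invented for illustration): the coefficient of \texttt{after} estimates the level change at the intervention, and that of \texttt{t\_after} estimates the change in slope.
\begin{lstlisting}[language=Python]
# Segmented regression for an interrupted time series:
# y = b0 + b1*t + b2*after + b3*t_after + error
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
t = np.arange(48)            # e.g., 48 monthly observations
t0 = 24                      # known time of the intervention
y = 10 + 0.1 * t - 2.0 * (t >= t0) + rng.normal(0, 0.5, t.size)

df = pd.DataFrame({
    "t": t,
    "y": y,
    "after": (t >= t0).astype(int),        # level-change dummy
    "t_after": np.clip(t - t0, 0, None),   # slope-change term
})

model = smf.ols("y ~ t + after + t_after", data=df).fit()
print(model.params)
\end{lstlisting}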
%\cite{mcdowall2019interrupted} book
%\citeauthor{bernal2017interrupted} paper tutorial
%widely used in medical fields where randomized controll trials are not an option/observational data already exists
% -> based on segmented regression, do regression on pieces of data, then stitch together
% -> its inferior to rct but better than nothing
% -> shortcomming need to be addressed
% -> requires (before and after) data of interest at (regular) intervals (TS in its)
% -> iterrupted from an intervention during the observatios at a known point in time
% -> intervention can be a single point in time or gradual roll out
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%paper links obtained:
%tutorial: Bernal et al. \cite{bernal2017interrupted}
%You Cant Stay Here: The Efficacy of Reddits 2015 Ban Examined Through Hate Speech \cite{chandrasekharan2017you}
% -> reddit hate community ban: change = ban
% -> todo
%literature
% Tracing Community Genealogy: How New Communities Emerge from the Old \cite{tan2018tracing}
% On the personality traits of stackoverflow users \cite{bazelli2013personality} analyzing personality traits, top reputated users are more extroverted than less reputated users
% -> gute vorlage http://softwareprocess.es/pubs/bazelli2013ICSMERA-Personality.pdf
% <- One-day flies on StackOverflow \cite{slag2015one}, 1 contribution during whole registration, only user with 6 month of registration
% -> [1] Discovering Value from Community Activity on Focused Question Answering Sites: A Case Study of Stack Overflow \cite{anderson2012discovering} accepted answer strongly depends on when answers arrive, considered not only the question and accepted answer but the set of answers to a question
% -> [23] Predicting the perceived quality of online mathematics contributions from users' reputations \cite{tausczik2011predicting} about mathoverflow and quality
% -> [4] Predictors of Answer Quality in Online Q&A Sites cite{harper2008predictors} 1) shows that fee or expert sites are better than open qa sites (greater fee better answers), 2) big communty sites like Yahoo! Answers out perform sites which depend on experts (e.g. library refernce services) (higher answer diversity and responsiveness)
% -> todo done
% -> todo done
% -> contains generic refernces to boost ref count
% -> [5] Building reputation in stackoverflow: an empirical investigation. \cite{bosu2013building} gaming the reputation system of SO, answering question with tags with lower expertise density, answering promptly, first one to answer, activity during off peak hours, contributing to diverse areas
% -> [8] Analysis of the reputation system and user contributions on a question answering website: Stackoverflow \cite{movshovitz2013analysis} about the reputation system, high reputation indicates primary source of answers and high quality, most questions asked by low reputation users but high reputation users post most questions on avg compared to low reputation users, effective finding of spam users and other extreme behaviors via graph analysis, predicting which users become influential longterm contributors, experts can be reliably identified based on the participation in the first few months after registration
% -> todo done
% -> [1] Design Lessons from the Fastest Q&A Site in the West \cite{mamykina2011design} understanding SO success, 1) productive competition (gamification reputation), 2) founders were already experts on site the created (ensured success early on, founders involved in community not external), 3) meta page for discussion and voting on features (same mechanics as on SO page)
% -> [2] How Do Programmers Ask and Answer Questions on the Web? \cite{treude2011programmers} qa sites very effective at code review and conceptual questions
% -> [10] The role of knowledge in software development \cite{robillard1999role} people have different areas of knowledge and expertise
% -> [3] Finding the Right Facts in the Crowd: Factoid Question Answering over Social Media \cite{bian2008finding}, about Yahoo! Answers, finding factual answers by using available data on user interaction
% No Country for Old Members: User Lifecycle and Linguistic Change in Online Communities \cite{danescu2013no}
% Better When It Was Smaller? Community Content and Behavior After Massive Growth \cite{lin2017better}, defaulting of subreddit, quality remains high, dip in upvotes directly after defaulting but recover quickly and get even higher than before, complaints about low-quality content do not increase, language stays the same, however community clusters among fewer posts than before defaulting
% -> breaching community norms (kraut 2012)
% starting a community: critical mass, enought users to attract other users who also create content
% attracting new users: attract new users to replace leaving ones, new users should be skilled and motivated to contribute (chanllange, depends on community some accept everyone others need specific skills (Eg OSS) or qualitities (eg illness for medical suppport groupgs, etc), mew users less commitment thatn old ones, newcommers may not behave according to community standard as they dont now them
% encoraging commitment: willingness to stay in community (increases statisfaction, les likely to leave, better performance, more contribution), harder than in companies with employee contracts, contrast to OSS (no contract, voluntarity), greter competition from other communities in contrast to rl where options are limimted by location and distance
% encouraging contribution: online communities need contributions by users (not lurking), content is foundation of community, contributions by users follows power law (usally, also confirmed in my results)
% regualting behavior: maintain a funtioning community, prevent troll, inappropiate behavior, limit damage if it occurs, ease of entry & exit -> high turnover
% -> lowering content quality (Gorbatai 2011) %TODO read and add to list of notizen
% Eliciting New Wikipedia Users Interests via Automatically Mined Questionnaires: For a Warm Welcome, Not a Cold Start \cite{yazdanian2019eliciting}
% -> cold start recommender system problem for recommending newcommers articles to read and get a feeling for how to write articles; similar to SO because new commers don't know the rules so well; familiarize newcommers with how things work on the site, onboarding
% Do organizational socialization tactics influence newcomer embeddedness and turnover? \cite{allen2006organizational} #newcommers to organizations, actively embedding newcomers into organization, shows connection between socialaization and turnover (leaving the organization)
% -> todo
% We Don't Do That Here: How Collaborative Editing with Mentors Improves Engagement in Social Q\&A Communities \cite{ford2018we} # mentoring new commers questions (before posting), 1 month experiment, collaborative experiment with stackoverflow team, novices got a choice upon submitting a question whether or not the want feedback from a mentor regaurding the question, if so redirect to help room where mentor reviews question and suggests changes to question, mentored questions significatly better than non-mentored ones, higher scores fewer offtopic or poor questions, novices more comfortable with mentor reviewed questions
% -> todo
% -> Non-public and public online community participation: Needs, attitudes and behavior \cite{nonnecke2006non} about lurking, many programmers do that probably, not even registering, lurking not a bad behavior but observing, lurkers are more introverted, passive behavior, less optimistic and positive than posters, prviously lurking was thought of free riding, not contributing, taking not giving to comunity, important for getting to know a community, better integration when joining
% -> Social Barriers Faced by Newcomers Placing Their First Contribution in Open Source Software Projects\cite{steinmacher2015social} onboarding in open source software projects, difficulties for newcomers, newcommers often on their own, barriers when 1st contributing to a project,
% -> Paradise Unplugged: Identifying Barriers for Female Participation on Stack Overflow \cite{ford2016paradise} gender gap, females only 5%, contribution barriers, found 5 gender specific (women) barriers among 14 barrier in total, barriers also affect groups like industry programmers
% -> Community-based production of open-source software: What do we know about the developers who participate? \cite{david2008community} only 5% women contribute to OSS
% -> https://insights.stackoverflow.com/survey/2019: 7.9% women, increase since 2015: 5.8%
% -> Gender, Representation and Online Participation: A Quantitative Study \cite{vasilescu2014gender} investigation on minorities (eg women), under representation of minorities
% -> Why So Few? Women in Science, Technology, Engineering, and Mathematics. \cite{hill2010so} women only 20 percent of bachelor degrees
% -> Women and science careers: leaky pipeline or gender filter? \cite{clark2005women} underrepresentation in STEM
% Stack Overflow Isn't Very Welcoming: It's Time for That to Change \cite{hanlon2018stack} # fits the story very well, effort to make site more welcoming
% -> marginalized group feel SO is a hostile and elitist place, new coders, women, people of color, etc
% -> admitting of problem that have not been addressed (enough), mixed messages (expert site or for everyone), to little guidance for new users, pecking on new users who dont know all little things on what (not) to do (no plz and thx, low quality question -> low qualtity answer -> comments about support for low quality) or bad english, previous attempts to improve welcoming, Summer of Love (https://stackoverflow.blog/2012/07/20/kicking-off-the-summer-of-love/), The War of the Closes (https://stackoverflow.blog/2013/06/25/the-war-of-the-closes/), The NEW new “Be Nice” Policy (“Code of Conduct”) — Updated with your feedback (https://meta.stackexchange.com/questions/240839/the-new-new-be-nice-policy-code-of-conduct-updated-with-your-feedback), Mentorship Research Project - Results + Wrap-Up (https://meta.stackoverflow.com/questions/357198/mentorship-research-project-results-wrap-up?noredirect=1&lq=1) TODO also refer paper about that here, removal condesting and sarcastic comments, ideas about beginner ask page (TODO already implemted?), dont judge users for not knowing things (e.g. posting duplicates)
% Rolling out the Welcome Wagon: June Update \cite{friend2018rolling} “Ask a Question Wizard” prototype, reduce exclusion (negative feelings, expectations and experiences), improve inclusion (learn from other communities facing similar problems), classification of abusive and unwelcoming comments
% Welcome Wagon: Classifying Comments on Stack Overflow \cite{silge2019welcome} #all about comments, effort to make site more welcoming, staff internal rating of comments (fine, unwelcoming, abusive, 57 raters, 13742 ratings, 3992 comments)
% One Size Does Not Fit All: Badge Behavior in Q\&A Sites \cite{yanovsky2019one} # all abount badges, steering users, motivation; previous paper say that contribution increases before badge obtaining and decrases afterwards, but they find it depends on type of user: 1) users are not affected by badge system but still contribute much, 2) contribution increase ans stays the same after badge achievement 3) return to previous levels
% -> todo
% -> []Can gamification motivate voluntary contributions? The case of StackOverflow Q&A community \cite{cavusoglu2015can} stimulting users to contribute via badges
% -> []SOCIAL STATUS AND BADGE DESIGN \cite{immorlica2015social} about badges and how they create status classes, badges for every user and individual badges
% -> []Quantifying the impact of badges on user engagement in online Q&A communities \cite{li2012quantifying} maintain consistent engagement, gamification via badges
% On the Causal Effect of Badges \cite{kusmierczyk2018causal} # all abount badges, steering users, motivation, first-time badges, first time badges steer user behavior if benefit greater then effort, otherwise no effect
% -> [] Quizz: Targeted Crowdsourcing with a Billion (Potential) Users \cite{ipeirotis2014quizz} many online comunities bysed on volutarty of users not paid workers
% -> todo
% Steering user behavior with badges \cite{anderson2013steering} # all abount badges, steering users, motivation, user may put in non trivial amounts of work to achieve badges -> powerful incentives, badges used in multiple ways (steer users to ask/answer more questions, voting, etc.)
% -> todo
% A comprehensive survey and classification of approaches for community question answering \cite{srba2016comprehensive}, meta study on papers published between 2005 and 2014
%literature analysis todo
%read papers and note down findings; keywords ...
%redo structure
%write
% old
%structure
%- various research on collaborative online communities, yahoo answers, stackoverflow/exchange, quora, wikipedia, ...
% - A comprehensive survey and classification of approaches for community question answering \cite{srba2016comprehensive} # good description of SO
% - Design Lessons from the Fastest Q&A Site in the West \cite{mamykina2011design} understanding SO success
%- maintaining a community:
% - onboarding of newcomers
% - keeping users on the platform
%- onboarding problem e.g. wikipedia, stackexchange
% - getting users to stay and contribute to the site
% - One-day flies on StackOverflow \cite{slag2015one}
% - Eliciting New Wikipedia Users Interests via Automatically Mined Questionnaires: For a Warm Welcome, Not a Cold Start \cite{yazdanian2019eliciting}
% -> cold start recommender system problem for recommending newcommers arictles to read and get a feeling for how to write articles; similar to SO because new commers
% - incentives for new users via reputation
% - gaming the system: Building reputation in stackoverflow: an empirical investigation. \cite{bosu2013building} gaming the reputation system of SO
% - prevent 1 day flies & keep new users engaged: Analysis of the reputation system and user contributions on a question answering website: Stackoverflow \cite{movshovitz2013analysis} about the reputation system
% - badges
% - One Size Does Not Fit All: Badge Behavior in Q\&A Sites \cite{yanovsky2019one} # all about badges, steering users, motivation
% -> Can gamification motivate voluntary contributions? The case of StackOverflow Q&A community \cite{cavusoglu2015can} stimulting users to contribute via badges
% -> SOCIAL STATUS AND BADGE DESIGN \cite{immorlica2015social} about badges and how they create status classes, badges for every user and individual badges
% % -> Quantifying the impact of badges on user engagement in online Q&A communities \cite{li2012quantifying} maintain consistent engagement, gamification via badges
% - On the Causal Effect of Badges \cite{kusmierczyk2018causal} # all abount badges, steering users, motivation
% - Steering user behavior with badges \cite{anderson2013steering} # all abount badges, steering users, motivation
% - newcomers socialization, experienced users as models/mentors, positive feedback to newcomers
% - Do organizational socialization tactics influence newcomer embeddedness and turnover? \cite{allen2006organizational} #newcommers to organizations
% - We Don't Do That Here: How Collaborative Editing with Mentors Improves Engagement in Social Q\&A Communities \cite{ford2018we} # mentoring newcomers questions (before posting), 1 month experiment
% - Stack Overflow Isn't Very Welcoming: It's Time for That to Change \cite{hanlon2018stack} # fits the story very well, effort to make site more welcoming
% - Welcome Wagon: Classifying Comments on Stack Overflow \cite{silge2019welcome} #all about comment, effort to make site more welcoming
%
%- quality:
% - Predictors of Answer Quality in Online Q&A Sites cite{harper2008predictors} shows that open qa sites are better than paywall or expert sites
% - Predicting the perceived quality of online mathematics contributions from users' reputations \cite{tausczik2011predicting} about mathoverflow and quality
%
%- quasi experiments
% - stackexchange change
% - You Cant Stay Here: The Efficacy of Reddits 2015 Ban Examined Through Hate Speech \cite{chandrasekharan2017you}
% -> reddit hate community ban: change = ban
%reminder
% write in aspect of new users
% -> getting users on board (community guide lines)
% -> incentives for new users via reputation (maybe batches (do research on that))
% -> gaming the system
% -> prevent 1 day flies
% -> keep new users engaged
% 2 problems: onboarding and keeping users active (eg badges)