College of Computing & Informatics

Palash Pandey

Data Science
Faculty Mentor: Dr. Erjia Yan

Analyzing gender bias in peer review using natural language processing

Publication and knowledge sharing are at the core of scientific communication, so the decisions made by conferences and journals are non-trivial. Publication venues have a profound impact on the attention and exposure a work receives, and it is therefore crucial that publication processes be as unbiased as possible. Some studies have analyzed available data quantitatively to identify bias, but most do not consider the quality of the papers themselves, relying instead on authors' attributes. Such analyses can be flawed because of the large variance in the quality of work that journals receive, which makes it difficult to generalize their results as substantial evidence of bias. We use a dataset of peer reviews and analyze each individual review for its sentiment. We assign sentiment scores to reviews, and these scores act as a normalizing factor for acceptance/rejection decisions. Our objective is to determine whether more favorably reviewed papers, i.e., papers with high sentiment scores, are more likely to be accepted. Results show that, overall, there is a noticeable difference in sentiment between accepted and rejected papers. We find no sentiment difference between accepted papers with male lead authors and accepted papers with female lead authors.
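The pipeline described above — scoring each review's sentiment and comparing scores across acceptance outcomes — can be sketched with a toy lexicon-based scorer. This is a minimal illustration, not the study's actual method: the word lists, function names, and sample reviews below are all hypothetical, and a real analysis would use a trained NLP sentiment model.

```python
# Hypothetical sketch of review-level sentiment scoring, aggregated to one
# score per paper so accepted and rejected papers can be compared.
# The tiny lexicons and sample reviews are invented for illustration only.

POSITIVE = {"novel", "clear", "strong", "convincing", "thorough", "interesting"}
NEGATIVE = {"unclear", "weak", "flawed", "limited", "incremental", "confusing"}

def sentiment_score(review: str) -> float:
    """Return (#positive words - #negative words) / #tokens for one review."""
    tokens = [t.strip(".,;:!?()").lower() for t in review.split()]
    if not tokens:
        return 0.0
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens)

def paper_score(reviews: list[str]) -> float:
    """Average the per-review scores so each paper gets a single score."""
    return sum(sentiment_score(r) for r in reviews) / len(reviews)

# Hypothetical reviews for one accepted and one rejected paper.
accepted = paper_score(["A novel and convincing method with strong results.",
                        "Clear writing, thorough experiments."])
rejected = paper_score(["The contribution is incremental and the evaluation is weak.",
                        "Unclear motivation; the analysis seems flawed."])
```

With paper-level scores like these in hand, one can compare score distributions between accepted and rejected papers, and between papers grouped by the lead author's gender, as the study does.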
