{"subscriber":false,"subscribedOffers":{}} Explicit Bias Toward High-Income-Country Research: A Randomized, Blinded, Crossover Experiment Of English Clinicians | Health Affairs

Research Article

Global Health Policy

Explicit Bias Toward High-Income-Country Research: A Randomized, Blinded, Crossover Experiment Of English Clinicians

Affiliations
  1. Matthew Harris ([email protected]) is a clinical senior lecturer in public health at the Institute of Global Health Innovation, Imperial College London, in the United Kingdom.
  2. Joachim Marti is a lecturer in health economics at the Institute of Global Health Innovation, Imperial College London.
  3. Hillary Watt is a statistician in the Department of Primary Care and Public Health, Imperial College London.
  4. Yasser Bhatti is a research fellow in frugal innovation, Institute of Global Health Innovation, Imperial College London.
  5. James Macinko is a professor in the Fielding School of Public Health, University of California, Los Angeles.
  6. Ara W. Darzi is director of the Institute for Global Health Innovation, Imperial College London.
PUBLISHED: https://doi.org/10.1377/hlthaff.2017.0773

Abstract

Unconscious bias may interfere with the interpretation of research from some settings, particularly from lower-income countries. Most studies of this phenomenon have relied on indirect outcomes such as article citation counts and publication rates; few have addressed or proven the effect of unconscious bias in evidence interpretation. In this randomized, blinded crossover experiment in a sample of 347 English clinicians, we demonstrate that changing the source of a research abstract from a low- to a high-income country significantly improves how it is viewed, all else being equal. Using fixed-effects models, we measured differences in ratings for strength of evidence, relevance, and likelihood of referral to a peer. Having a high-income-country source had a significant overall impact on respondents’ ratings of relevance and recommendation to a peer. Unconscious bias can have far-reaching implications for the diffusion of knowledge and innovations from low-income countries.

Unconscious bias may be influential in the publication and citation of research, and published research articles may be evaluated differentially based on the perceived (or actual) characteristics of the author—such as sex, rank, place of work, and country of origin. For example, article acceptance rates have been shown to be higher when first authors live in English-speaking high-income nations than when they live in non-English-speaking high-income nations.1 An author’s affiliation from the United States can increase his or her citation counts by 20 percent, and articles focusing on the United States or Europe have been reported to have a greater citation frequency compared with articles that focus on developing countries.2 One study found that the likelihood of non-US abstracts being accepted for an American Heart Association Scientific meeting was significantly higher when the abstracts were reviewed with the author affiliation omitted (1.81 compared to 1.41).3 Other studies have found that journals favor authors located in their own country.4,5 Carole J. Lee and colleagues refer to this as “nationality bias.”6 Although it is possible that articles with a focus on the United States are simply better articles, it is just as possible that scientists pay more attention to US articles while ignoring equally good articles conducted in a different location—a phenomenon that has been called the “Americanization” of science.7

There is a surprising lack of methodologically sound, robust, controlled studies to ascertain the effect of authors’ characteristics on research interpretation. Douglas Peters and Stephen Ceci’s controversial experiment found that previously published articles resubmitted to the same journal that published them were subsequently rejected when the fabricated author affiliations were altered to lesser-known institutions.8 Shining a light on the fallibility of the peer review process led to a plethora of descriptive studies based on citation counts and acceptance rates. However, in order to isolate the impact of bias in research evaluation, it is essential to control for the type and quality of the research and for the reviewer of the research itself. Four studies controlled for the type and quality of research in their assessment of the impact of social bias8–11 but not for the reviewer of the research. As noted in Rachel Bruce and colleagues’ systematic review,12 an important methodological challenge in experimentally assessing peer review processes is to do so without revealing the purpose of the study.

For this research, we were less interested in ensuring that two people agree in their judgment of a manuscript and more interested that the same person agree with him- or herself when confronted with the same manuscript whose sole difference is the institution and country of the author and the research. Given the paucity of research that controls for both the reviewer and the quality and type of research being reviewed, we conducted a randomized, controlled, and blinded crossover study to assess the within-individual change in evaluation of research abstracts when the source is experimentally altered—in this case, between high- and low-income countries.

Study Data And Methods

Trial Design

In our randomized, controlled, and blinded crossover experiment, participants rated the same abstracts on two separate occasions, one month apart, with the source of these abstracts changing, without their knowledge, between high- and low-income countries. To be included, participants needed to be medically qualified clinicians, of any specialty, living and practicing in England at the time of their participation.

Study Settings

We used a panel provider through the Qualtrics survey platform to recruit survey respondents. Qualtrics panels consist of curated lists of people interested in participating in social research online. Respondents who fully completed the survey in wave 1 were then contacted again for the wave 2 survey four weeks later, with the abstract sources they had received in wave 1 reversed (see Exhibit 1). Participants who did not complete wave 2 surveys received two reminder messages. Those who completed the survey were given an incentive for their time after each wave in accordance with market rates for clinician reimbursement for online surveys. The survey was soft-launched July 8, 2016, and fully launched on July 15.

Exhibit 1 Sources attributed to abstracts in the study of bias in assessing research attributed to high- and low-income countries

                                                        Group(a)
                                           A            B            C            D
Sample size                                81           89           89           88
Journal attribution (all abstracts)        NEJM         NEJM         JCM          JCM

University/country attributions
Control abstract(b): antenatal care quality
 Wave 1                                    Oxford       Oxford       Oxford       Oxford
 Wave 2                                    Oxford       Oxford       Oxford       Oxford
Abstract 1: randomized trial of DOTS treatment for tuberculosis (Note 13 in text)
 Wave 1                                    Freiburg     Addis Ababa  Freiburg     Addis Ababa
 Wave 2                                    Addis Ababa  Freiburg     Addis Ababa  Freiburg
Abstract 2: cross-sectional comparison of HIV services in maternal and child health (Note 14 in text)
 Wave 1                                    Addis Ababa  Freiburg     Addis Ababa  Freiburg
 Wave 2                                    Freiburg     Addis Ababa  Freiburg     Addis Ababa
Abstract 3: randomized trial of cholesterol-lowering drug rosuvastatin (Note 15 in text)
 Wave 1                                    Mzuzu        Harvard      Mzuzu        Harvard
 Wave 2                                    Harvard      Mzuzu        Harvard      Mzuzu
Abstract 4: cross-sectional trial of methadone treatment in drug addicts (Note 16 in text)
 Wave 1                                    Harvard      Mzuzu        Harvard      Mzuzu
 Wave 2                                    Mzuzu        Harvard      Mzuzu        Harvard

SOURCE Authors’ own research. NOTES High- and low-income countries were chosen based on gross domestic product (GDP) per capita, using 2015 World Bank data (see Note 17 in text). High-income countries were selected from the top-ten countries by GDP per capita and by membership in the Organization for Economic Cooperation and Development; low-income countries were selected from the bottom-ten countries by GDP per capita. Control: University of Oxford, United Kingdom. High income: University of Freiburg, Germany; Harvard University, US. Low income: University of Addis Ababa, Ethiopia; University of Mzuzu, Malawi. NEJM is New England Journal of Medicine. JCM is Journal of Community Medicine and Health Education.

(a) Respondents were randomized to Groups A–D to control for a possible order effect.

(b) Control abstract source remained unchanged between waves to control for between-wave variation in rating.

Interventions

The study team selected four abstracts from Cochrane Reviews to ensure that the type of study being described had high internal validity and that each study would be of at least some interest to most clinicians.13–16 All four were of similar length and complexity. We also included one control abstract, on the topic of mother-to-child transmission of HIV, whose source (Oxford University, UK) did not change between the two rounds, to account for any within-individual variation in ratings over time.

Abstract sources, listed as “author affiliation,” were fictionalized for institution and country of origin. High-income source countries (United States and Germany) were selected from the top-ten countries by gross domestic product (GDP) per capita (more than US$36,000), and Organization for Economic Cooperation and Development (OECD) membership. Low-income source countries (Ethiopia and Malawi) were selected from the bottom-ten countries by GDP per capita (less than US$1,046 per capita), using 2015 World Bank data.17 The institutional affiliation was fictionalized to one of the respective countries’ top-five universities that also had a medical or health care faculty. These were Harvard University (US), Freiburg University (Germany), University of Addis Ababa (Ethiopia), and University of Mzuzu (Malawi). We used the 2014 Times Higher Education World University Rankings18 to choose the high-income-country sources, and the uniRank website,19 a source of international rankings of institutions, to choose the low-income sources. We also included the name of fictionalized journals, listed as “journal” in the abstract, to ascertain the relative effect of journal Impact Factor in the rating of the abstracts. The high-impact journal used was the New England Journal of Medicine (Impact Factor: 72.41)20 and the low-impact journal was the Journal of Community Medicine and Health Education (5-year Impact Factor: 2.15).21

Outcomes

Each abstract was accompanied by the same three questions. First, how strong is the evidence presented in this abstract? Second, how relevant to you is the research in the abstract? Third, how likely are you to recommend this abstract to a colleague? Responses were on a scale of 0–100, with 0 as not at all strong, relevant, or likely and 100 as extremely strong, relevant, or likely.

Sample Size

We conducted a between-group US pilot study to test the survey platform and obtain effect-size estimates.10 Based on the effect size detected in that study, we calculated that a sample of fifty-two people completing both waves of evaluation would be required to detect a within-individual difference of 2 points (on the 0–100 scale) for each abstract, with 80 percent power and a 5 percent type 1 error rate.
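The paired-difference calculation behind a target like this can be sketched with a normal approximation. The within-person standard deviation of rating differences used below (about 5.1 points) is a hypothetical value chosen so the formula reproduces a sample of roughly fifty-two; it is not a figure reported in the study.

```python
import math
from statistics import NormalDist

def pairs_needed(delta, sd_diff, alpha=0.05, power=0.80):
    """Pairs required for a two-sided paired t-test (normal approximation):
    n = ((z_{1-alpha/2} + z_{power}) * sd_diff / delta) ** 2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80 percent power
    return math.ceil(((z_a + z_b) * sd_diff / delta) ** 2)

# delta = 2 points on the 0-100 scale; sd_diff = 5.1 is an illustrative
# assumption about the within-person SD of rating differences.
n = pairs_needed(delta=2, sd_diff=5.1)
```

With these assumed inputs the formula yields fifty-two completing pairs; a stricter power target (say 90 percent) would push the requirement noticeably higher.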

Randomization

After they completed screening questions to ensure that they met the inclusion criteria, respondents were randomized to groups A1 or B1 (for abstracts with NEJM as the journal type) or to C1 or D1 (for abstracts with JCM as the journal type) (Exhibit 1) and invited to rate the five abstracts (four experimental and one control). To avoid a possible order effect, the order in which the abstracts were presented in the survey was randomized for each participant. The survey platform used simple randomization occurring in real time as the respondent entered the survey, so that the respondent was unaware that any randomization had taken place. The survey type (A1, B1, C1, or D1) to which respondents were randomized in wave 1 dictated the survey type that they subsequently received in wave 2 (A2, B2, C2, or D2) one month later.
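The allocation scheme described above can be sketched as follows. This is a hypothetical re-implementation for illustration (the study used the Qualtrics platform's real-time randomizer); the group and abstract labels are assumptions.

```python
import random

GROUPS = ["A", "B", "C", "D"]  # Exhibit 1 arms: A/B carry NEJM, C/D carry JCM
ABSTRACTS = ["control", "abstract1", "abstract2", "abstract3", "abstract4"]

def enrol(rng=random):
    """Simple (unrestricted) randomization at survey entry. The wave-2
    survey type is fully determined by the wave-1 draw; only the
    presentation order of the abstracts is re-randomized per respondent."""
    group = rng.choice(GROUPS)
    return {
        "wave1": group + "1",
        "wave2": group + "2",
        "order": rng.sample(ABSTRACTS, k=len(ABSTRACTS)),
    }
```

The key design property the sketch captures is that the crossover is deterministic after enrolment: a respondent drawn into A1 always receives A2, so the only chance elements are the initial group and the abstract order.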

Blinding

So that the purpose of the study did not influence the responses, the survey was described as a speed-reading survey, and we asked respondents to read and rate each abstract as quickly and carefully as possible, to enhance anchoring and fast thinking.22 The time taken to read and respond to each abstract was measured by the survey platform and presented to each respondent upon completion of the survey to heighten the “psychological realism” of the survey. We assessed the success of the blinding by asking respondents, at the conclusion of the second survey, whether they had noticed any changes in the survey design between the two waves. Each wave included a mix of two low-income and two high-income-country sources, so that respondents were not likely to become aware of the purpose of the study when the sources changed from one wave to the next.

Analysis

Data were retrieved via Qualtrics in CSV format and analyzed using Stata/SE 13. We used demographic (age, sex, country of birth) and professional experience (research exposure, peer-review experience, educational attainment) covariates to assess balance between the groups. Respondents’ ages were calculated based on a presumed midyear birth and a survey completion date of January 31, 2017. We first calculated the mean ratings for each abstract and each question, and then compared the mean within-individual difference in ratings between abstracts with high- and low-income-country sources using two-tailed t-tests. We also calculated the difference in ratings between the two survey blocks (that is, Groups A and B versus Groups C and D), which gives an indication of the effect of journal type (high versus low impact). We then estimated fixed-effects models for each abstract and outcome independently, as well as for all abstracts pooled, to account for unobserved heterogeneity at the respondent level, and we included a wave dummy to control for any trend in ratings between waves. We conducted sensitivity analyses in which we excluded the three respondents who recognized that the sources had changed between the waves (results not shown).
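With only two waves, a respondent fixed-effects model with a wave dummy is algebraically equivalent to regressing first differences of the ratings on the change in source, with the wave dummy absorbed into the intercept. The sketch below illustrates that equivalence on made-up crossover data; it is not the study's Stata estimation, and the numbers are hypothetical.

```python
def fe_two_wave(ratings, sources):
    """Two-period fixed-effects estimate of the high-income-source effect.
    ratings / sources: dicts mapping respondent -> (wave1, wave2) values;
    source is coded 1 for a high-income-country attribution, else 0."""
    d_y = [ratings[r][1] - ratings[r][0] for r in ratings]
    d_x = [sources[r][1] - sources[r][0] for r in ratings]
    n = len(d_y)
    mx, my = sum(d_x) / n, sum(d_y) / n
    # OLS slope of delta-rating on delta-source; the intercept (not
    # returned) plays the role of the wave dummy.
    return sum((x - mx) * (y - my) for x, y in zip(d_x, d_y)) / \
           sum((x - mx) ** 2 for x in d_x)

# Hypothetical crossover: respondents 1-2 switch low -> high income,
# respondents 3-4 switch high -> low.
ratings = {1: (30, 36), 2: (28, 33), 3: (35, 31), 4: (33, 30)}
sources = {1: (0, 1), 2: (0, 1), 3: (1, 0), 4: (1, 0)}
beta = fe_two_wave(ratings, sources)
```

Because the crossover runs in both directions, the estimator separates the source effect from any secular drift between waves, which is exactly what the wave dummy accomplishes in the pooled fixed-effects models.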

Ethical Considerations

Data were analyzed at the individual level, deidentified, and aggregated to the sample population. The mild, nonharmful deception regarding the purpose of the study was necessary because awareness of the objective of the research was likely to bias responses. It had a negligible impact on the respondents’ experiences with the survey. Potential respondents were recruited from panel management companies that specialize in providing survey services, which means that they had already agreed to participate in research surveys. Participants received an incentive for their time after each wave based on the panel management companies’ best practices for reimbursing clinicians for online surveys. The data were stored on password- and firewall-protected computers at Imperial College London that were accessible only by the researchers involved. The Imperial College London Research Ethics Committee and Joint Research Compliance Officer approved the protocol for the research (ICREC 16IC3400).

Limitations

Several limitations are worth mentioning. First is the potential for selection bias in an online survey, which could affect the representativeness of the findings, but only if participation in the survey were associated with bias against research from low-income countries. We addressed this issue by blinding the participants to the purpose of the study. Second, strength of evidence, relevance, and referral to a peer may be understood differently by different participants. We addressed this issue by examining only within-individual variation in responses.

Third, this study did not delineate the mechanism through which changing sources affects individuals’ reviews of the research abstracts, so we cannot be certain whether the differences are due to unconscious or conscious biases. We used the high- versus low-income-country contrast as an anchor to elicit potential biases in our respondents, but it was beyond the scope of the study to examine which bias or biases were playing out in their minds; we were able only to measure the effect of those biases empirically. Finally, our research used abstracts and not full research articles; it is possible, although unlikely, that reviews might differ if full research articles were used. Forms of social bias will manifest and be reproduced at the point of consumption, whether the research is in abstract or long form.

Study Results

We obtained 551 complete responses at baseline (wave 1). Of those respondents, 63.0 percent completed the survey at follow-up approximately thirty days later (mean: 28.3 days; 95% confidence interval: 28.0, 28.7), resulting in a longitudinal sample of 347 complete responses.

Respondents were comparable within each group for a range of variables including sex, age, country of birth, years since qualification for clinical practice, percentage holding a doctoral degree, time spent in clinical practice, time spent reading the abstracts, and experience with peer review (Exhibit 2).

Exhibit 2 Characteristics of respondents to the study of bias in assessing research attributed to high- and low-income countries

                                                      Respondent group(a)
Characteristic                                        A        B        C        D
Male                                                  69.1%    65.2%    71.9%    79.3%
Mean age (years)                                      45.5     45.5     44.1     43.5
Born in the UK                                        55.6%    66.3%    62.9%    69.3%
Doctoral degree                                       14.8%    12.4%    19.1%    26.1%
Frequent consumer of research(b)                      33.3%    32.6%    39.3%    38.6%
Frequent peer reviewer of research(b)                 4.90%    1.12%    6.70%    6.80%
Mean time since qualification (years)                 21.0     21.4     20.3     19.3
Spending 3 or more days per week
 in clinical practice                                 95.1%    97.8%    93.3%    94.3%
Survey response
 Mean time spent per abstract (seconds)               83.0     71.7     76.4     70.0
 Mean time between waves 1 and 2 (days)               27.9     28.1     28.9     28.3

SOURCE Authors’ own research. NOTES High- and low-income countries are described in the text. A version of this exhibit containing 95% confidence intervals is in the online Appendix. To access the Appendix, click on the Details tab of the article online.

(a) Respondents were randomized to Groups A–D to control for a possible order effect.

(b) More than 2 research articles per week.

The combined results in Exhibit 3 show that high-income-country source had a significant overall impact on relevance (mean: 4.50; 95% CI: 3.16, 5.83) and recommendation (mean: 3.05; 95% CI: 1.77, 4.33). Perceived relevance was affected for three abstracts (abstract 1, mean: 2.69; 95% CI: 0.27, 5.11; abstract 2, mean: 5.51; 95% CI: 3.01, 8.02; and abstract 3, mean: 8.09; 95% CI: 5.34, 10.84). Likelihood of recommendation to a peer was affected for three abstracts (abstract 1, mean: 2.76; 95% CI: 0.27, 5.26; abstract 3, mean: 5.50; 95% CI: 2.75, 8.25; and abstract 4, mean: 3.07; 95% CI: 0.54, 5.60). The overall impact of high-income-country source on assessment of the strength of the evidence in the abstracts was positive but not quite statistically significant (mean: 1.35; 95% CI: −0.06, 2.76), although a significant impact of high-income-country source on strength of evidence was found for abstract 3 (mean: 3.98; 95% CI: 1.16, 6.79). Findings were not altered when the three respondents (less than 1 percent of the sample) who noticed the change in sources were excluded from the analysis in our sensitivity analysis (data not shown).

Exhibit 3 Respondents’ ratings of abstracts, by country income and journal Impact Factor, study of bias in assessing research attributed to high- and low-income countries

                        Country income group                Journal Impact Factor
                        Low     High    Difference(a)       High (72.41)  Low (2.15)  Difference(b)
Abstract 1: randomized trial of DOTS treatment for tuberculosis
 Strength, mean         50.9    50.3    −0.54               50.2          51.0        −0.79
 Relevance, mean        26.4    29.1    2.69**              26.0          29.4        −3.42
 Recommendation, mean   27.3    29.9    2.76**              28.1          29.1        −0.94
Abstract 2: cross-sectional comparison of HIV services in maternal and child health
 Strength, mean         42.9    44.9    1.97                43.9          43.9        −0.04
 Relevance, mean        24.3    29.7    5.51****            27.7          26.3        1.43
 Recommendation, mean   26.3    27.6    1.30                27.7          26.1        1.60
Abstract 3: randomized trial of cholesterol-lowering drug rosuvastatin
 Strength, mean         55.8    59.8    3.98****            57.3          58.3        −1.00
 Relevance, mean        34.4    42.4    8.09****            36.3          40.3        −4.04**
 Recommendation, mean   33.6    39.0    5.50****            34.4          38.0        −3.60
Abstract 4: cross-sectional trial of methadone treatment in drug addicts
 Strength, mean         39.5    39.5    0.10                38.9          40.1        −1.25
 Relevance, mean        25.8    27.9    2.00                25.6          28.1        −2.45
 Recommendation, mean   24.5    27.6    3.07**              26.1          26.0        0.04
Combined results across abstracts
 Strength, mean         47.3    49.4    1.35                48.4          48.7        −0.30
 Relevance, mean        27.7    33.4    4.50****            30.4          31.8        −1.44
 Recommendation, mean   27.9    32.1    3.05****            30.3          30.5        −0.28

SOURCE Authors’ own research. NOTES N=347 clinician respondents. Data represent mean scores on a scale of 0 to 100, with 0 as not at all strong, relevant, or likely and 100 as extremely strong, relevant, or likely. High- and low-income countries are described in the text. A version of this exhibit containing standard deviations and 95% confidence intervals is in the online Appendix. To access the Appendix, click on the Details tab of the article online. DOTS is directly observed treatment, short course.

(a) Fixed-effects model, to account for unobserved heterogeneity at the respondent level, adjusted for wave.

(b) Unadjusted between-group comparisons.

**p<0.05

****p<0.001

There was no significant interaction between journal type and country source: the effect of country source was the same regardless of journal type, and it was considerably larger than the effect of journal type. In the between-group analysis (Exhibit 3), differences in ratings by journal type were not significant for any abstract except abstract 3, for which relevance was rated lower with the higher-impact journal source (mean: −4.04; 95% CI: −8.16, −0.08).

Discussion

This study is, to the best of our knowledge, the first to measure the presence of explicit bias in abstract review controlling for both the reviewer and the research that was evaluated. We found that changing the source of an abstract from a low- to a high-income country led to a significant increase in the perceived relevance of the abstract and the subsequent likelihood of referral of the abstracts to others. This finding was unaffected by whether the abstract was listed as having been published in a high- or a low-impact journal. The positive effect of changing the source from a low- to a high-income country was significant for the rating of the strength of the evidence for one abstract (describing a randomized controlled trial) and also very nearly significant for rating the strength of the evidence of all of the abstracts.

For relevance and recommendation, ideally we would not expect the change in country sources to have any effect, and yet for some of the abstracts, all else being equal, the effect was significant. The change in rating was up to 25 percent of the mean score for the abstract in some instances. For the strength of evidence, there are well-developed and well-known criteria upon which this can be assessed, such as the hierarchy of evidence, where randomized controlled trials and other experimental designs are preferred over observational designs. Thus, we were not expecting the effect of changing the source country to have any impact on this outcome. Yet even this measure was affected positively and significantly in one of the abstracts, describing a randomized controlled trial, by changing the country source from low to high income.

The marketing literature has known of the effect of country of origin on product evaluation for several decades,2325 and the recruitment industry has made strides to ensure that candidates are treated fairly and equally by removing all identifiable information from curriculum vitae and job applications. The research community needs to learn from these industries to avoid unwarranted admiration for research from some contexts to the detriment of others, particularly when the research is based on characteristics that are completely unrelated to an article’s scientific merit.

This study touches on issues of external validity, or generalizability of research, and how consumers of medical research understand, measure, and perceive it. Whether or not research is considered relevant ought not to be affected by where that research has come from, given that there are no accurate ways to assess how comparable two contexts are (if, indeed, relevance were to be based on context comparability at all). It follows that any change in the perceived relevance of research from one country compared to another is due to unconscious (or conscious) cognitive biases, particularly when the research being judged is identical in every other respect, as was the case in this study. Bias need not be conscious or malicious; it is simply a judgment as to whether results from a study will or won’t apply in another context.26

There are some tools, albeit imperfect ones, to help researchers decide whether the results from a study apply in another setting: statistical modeling of research findings,27 reweighting participant characteristics,28 or embedding randomized controlled trials within large data sets29 can approximate estimates of generalizability. Cochrane’s DECIDE and CERQual frameworks are useful for exploring whether the generating mechanisms that explain intervention and outcome in the research context are present in the adopter context. However, none of these techniques is available to everyone, all of the time, for every research topic. Even reporting guidelines, of which there are now over 300,30,31 will not necessarily prevent errors and biases in the interpretation of a report. While tools exist to measure the internal validity of a research study, there are very few to assess its external validity, and no approach effectively removes the possibility of bias in this judgment. Preconceived notions of what does or does not constitute generalizable research will likely be heavily influenced by prior beliefs.

Based on the findings from this study, we expect that much research from low-income countries, even once it has passed through other publication barriers of peer review, has been and will continue to be discounted prematurely and unfairly, through biased assessment of either its rigor or its relevance. In previous research we found several barriers to the adoption of innovations from low-income countries, including frank prejudice;32 doubt as to whether contexts are similar enough to learn from;33 and, in international health partnerships, the presumption that low-income countries have nothing to teach, only to learn.34 Health care workers in the United States and United Kingdom might consider it unusual for low-income countries to be viewed as sources of innovation.35 So-called reverse innovation—the adoption of innovations from low- to high-income contexts—has been increasingly studied, predominantly in the management and business literatures, with a focus on the firm, but increasingly also in the health policy space.36 Identifying low-cost models of care that do not compromise on quality is a priority for many health systems. It is important to ensure that low-income countries are not discounted prematurely as a source of innovation but continue to be recognized as important participants in international health partnerships and overseas volunteering37 as well as a critical element of an innovation ecosystem.38

Although there have been advances to protect against bias at the prepublication stage, such as blinded and open peer review, no protections are in place once an article has been published to ensure that the assessment of such research is free from bias.

Nationality bias is not necessarily the only bias potentially at work. Lee describes several other types, such as prestige bias, confirmation bias, low interrater reliability, affiliation bias, bias as a function of reviewer characteristics, content-based bias, and conservatism.6 In 1968 the sociologist Robert Merton, with Harriet Zuckerman, coined the term “the Matthew effect” to describe how the perceived reputation of an author influences the perceived significance of an article. They noted that the effect “violates the norm of universalism embodied in the institution of science and curbs the advancement of knowledge.”39 Scientists are in fact applying heuristic methods, or mental shortcuts, when conducting evaluation tasks, even if they are not aware (or will not admit to being aware) of them.40 Several approaches to evaluating content are potentially in play: the “take-the-first” heuristic, the “fluency” heuristic, the “take-the-best” heuristic, and the “recognition” heuristic.40 While we consider the main distinguishing feature between Germany and Ethiopia and between the United States and Malawi to be GDP per capita, some respondents might have considered other characteristics. The elicited bias is a complex function of the reviewer, the abstract content, and the sources that were used. It is possible that the accentuated effect of source bias we found for abstract 3 is related in part to the content of the abstract (a pharmaceutical randomized controlled trial) or that the country sources for this abstract (Malawi and the United States) tend to elicit a greater bias. This could be investigated in further research.

Conclusion

Our study makes a significant contribution to the literature on peer review because we not only controlled for the reviewer, the abstract, and the order in which the abstracts were rated, but also blinded reviewers to the purpose of the study. One’s viewpoint of a research article is a complex function of the context of the research, the research itself, and the consumer of that research. We believe that there is merit in being more ambitious than simply publishing articles and hoping for the best. One option is to remove all author-identifiable information from published articles and to use metadata instead to track citation rates and impact. Another is to develop risk-of-bias tools, similar to the Cochrane tools, that are applied not to the research but to oneself. Either way, there is a clear need for further behavioral studies of evidence-based medicine, including factorial design studies to measure the impact of different sources, in different populations, using different types of research abstracts.

ACKNOWLEDGMENTS

The study was supported by an Imperial College–National Institute for Health Research (NIHR) Biomedical Research Centre (BRC) grant (No. WSSS-P61853). NIHR and BRC had no role in the study design, collection, analysis, and interpretation of the data; the writing of the report; or the decision to submit the article for publication.
