Faulty practice in field biology – what should be done?

Tomasz Wesołowski

Laboratory of Forest Biology, Wrocław University, Sienkiewicza 21, 50-335 Wrocław, Poland

 

Abstract: I recently expressed my concerns about the use of nest-boxes in ornithological research. The issues raised there, however, seemed symptomatic of deeper problems in the practice of field biology, which could undermine the integrity of the science we pursue, a concern shared by others in ornithology and in the wider zoological community. Here I outline some major problems in field biology (deficiencies in study design, style of data reporting and editorial procedures), identify possible causes, and propose feasible solutions. There are several problems in the practice of field biology which, if not remedied, could have harmful long-term effects. Authors striving to publish many papers in the most prestigious journals tend to frame titles in very broad terms, to overstate the importance of their work and to overgeneralise its results. Exploratory, observational studies are undervalued (treated as a sort of inferior science, not deserving publication in high-profile journals). Field studies are done and published by people without basic field skills or taxonomic knowledge (an especially acute problem in poorly known tropical regions). Moreover, field procedures are inadequately described (so others can neither evaluate the quality of the work nor replicate the study if necessary). Field data can be underreported (biological data in the results section are replaced by the outcomes of statistical analyses). Proper credit to earlier work is missing: many authors tend to ignore earlier sources and refer only to the most recently published papers. I think it is possible to remedy these problems. In order to do so, we have to “dethrone” publications and cease to treat them as if they were the purpose of scientific work and the sole measure of scientific output. To improve the situation, we also have to require that journal editors (1) modify the list of requirements distributed in their guidelines for authors to include aspects crucial for proper documentation; (2) extend the list of questions which referees have to address to include these aspects as well; and (3) consistently reject all submissions not fulfilling these minima. Additionally, journals would have to stop promoting insubstantial quality criteria, e.g. “impact factors”.

Key words: descriptive study, documentation standard, field methods, generalization, natural history, publication, replication, scientific integrity

Published: 30 September 2012

Copyright: © 2012 Wesołowski. This is an open-access article distributed under the terms of the Creative Commons Attribution Licence, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and sources are credited.

E-mail: tomwes@biol.uni.wroc.pl

Citation: Wesołowski T. 2012. Faulty practice in field biology – what should be done? www.forestbiology.org/articles/FB_01: 1-7.


Introduction

Marcel Lambrechts and I each recently reported concerns related to the use of nest-boxes in ornithological research (Lambrechts et al. 2010, Wesołowski 2011). The issues we raised, however, seemed symptomatic of deeper problems in the practice of field biology, which could undermine the integrity of our science, a concern shared by others in ornithology (Beehler 2010, Bijlsma 2010, participants of round-table discussions at recent ornithological conferences: IOC Brazil 2010, EOU Latvia 2011) and more widely (e.g. Brischoux & Cook 2009, Arnold 2009, Hołyński 2010, Schwenk 2010, Lindenmayer & Likens 2011).

Here I outline some major problems in field biology (deficiencies in study design, style of data reporting and editorial procedures), identify possible causes, and propose feasible solutions. Contrary to standard scientific practice, I do not cite particular examples of inadequate practice, though every case described below has its real-life counterpart. This is because my aim is not to point fingers, but simply to bring to readers’ minds examples from their own experience which, if I am correct, they will have encountered or even committed themselves. As I contend that the deficiencies are widespread, it would be unfair to single out individual scientists.

 

Problems

Goals and means of scientific study – a reminder. First of all, we have to remind ourselves why we study organisms and what the purpose of our work is. We study animals to expand our knowledge about them and about their functional relationships with other components of the environment, and to increase our understanding of their biology. Moreover, we often use animals as models to investigate broader scientific problems.

We can apply an array of tools, such as equipment (e.g. binoculars, data loggers), logical constructs (study design, hypothesis testing) and analytical techniques (statistics, phylogenetic contrasts, DNA sequencing). Importantly, we are obliged to document, in a publicly accessible form, our research questions and the tools, logical constructs and analytical techniques that have been employed to investigate the natural phenomena. Our goal, however, is to increase knowledge and understanding of animals (and of the living world at large).

There are universally accepted prescriptions for using the tools mentioned and for reporting the results obtained. To be rigorous, researchers must (1) accurately identify study objects and correctly use devices for collecting data; (2) provide full descriptions of the methods employed, to permit others to replicate the protocols; (3) carefully document observations/experiments in the results section of papers, in order to allow independent evaluation of data quality; (4) critically assess bias in the data in the discussion, evaluate how far the data can legitimately be extrapolated, and determine how representative the data sets (and study conditions) are of a wider range of conditions; and (5) give due credit to the work of predecessors. Any contribution not satisfying these five requirements should be rejected from publication. Yet such papers are often accepted. Why is this so?

A changing purpose of scientific investigation? Judging from the symptoms, it appears that publication has evolved from a means of disseminating information into the very goal of scientific endeavour. It seems that this change of focus lies behind most of the problems observed currently (see below). Of course, scientists have a duty to report their findings, but if too much emphasis is placed on the publication stage at the expense of the other steps, the results are very likely to be counterproductive in the long run.

This shift of focus is probably due to large-scale changes in the academic environment. We live in a highly competitive culture, in which people and organizations have to compete fiercely for positions, funds and prestige. Field biologists are part of this system. Their “quality” and efficiency are now increasingly (sometimes solely) judged by the number of publications they produce and by where those publications appear, much more than by the actual scientific contribution those publications represent (e.g. Brischoux & Cook 2009, Schwenk 2010, Godfray 2010, Wägele et al. 2011). Short-term gain is promoted over the long-term pay-offs of quality and advancement.

The pressure to publish many papers in the most prestigious journals (meaning those with the greatest impact factor) may lead authors to think about how to “sell” their work most efficiently to the reviewers and editors of those journals (e.g. Brischoux & Cook 2009 and refs therein). The editors, on the other hand, pressed to increase journal prestige, may be tempted to take into account whether a paper is going to be popular enough to be cited frequently shortly after its publication. Admittedly, authors and journals are strongly motivated to inflate the importance of publications, as evidenced by their succumbing to exaggeration, to the use of “sexy” terms and to titles promising more than can actually be delivered (see below). Many of them still resist these temptations, but a growing number of individuals unfortunately fall victim to these persuasion techniques. Whether such tricks of the advertising trade necessarily betray problems with the conduct of the science itself is hard to say without a more rigorous review of particular cases. However, keeping up appearances should be discouraged in rigorous scientific documentation.

Exaggeration of study importance. We often study individual species or communities within restricted geographical areas; the results can be strongly context dependent, full of idiosyncrasies. They may be generalized, but only to a limited extent. We are aware that carrying out an analogous study on another species, or on the same species in a different place or year, may yield different results. In contrast, studies at the molecular level can produce much more consistent results, largely independent of the location of the lab or the year the analysis is performed; a change of study species would often make no difference. No wonder, then, that field biologists, forced to compete with their white-coat lab colleagues, tend to improve their chances by overstating the implications of their work. For example, the title “Reproduction of Mus major in relation to weather in two woodlots, in 2000-2004” might adequately describe the content of a paper, but would not help it to get accepted by a prestigious journal. With a little “face-lifting”, however, the title becomes “Rodent metapopulation (mice were observed in two spatially separated patches) dynamics (their reproduction rates varied) and global climate change (spring temperatures fluctuated across years): a long-term study” (any study longer than three years qualifies), increasing the chances that the paper will be sent out for review. Thus, a little “tinkering” with the title can make a contribution sound more interesting, magically changing it from a manuscript that appears “dull, parochial, singular” into one that is about “serious, ground-breaking science”. If you can achieve so much with so little effort, it is almost impossible to resist the temptation to puff up the importance of your study, to pretend it is of greater generality than it actually is. One commonly used trick is to refrain from naming the species in the title. Instead, one has studied a “Palearctic warbler” or – better – a “migratory passerine”, or – best of all – a “migratory bird”. Titles framed in this way suggest that the results of a local single-species study can be generalized to all migratory birds on all continents. These implied claims are dishonest but, as they apparently help in “selling” articles to editors or grant applications to funding agencies, more and more researchers tend to employ such means.

Discrimination against exploratory, observational studies. Contributions to science appear to be judged not only by their content but also by the type of methodological approach used. Some approaches – “descriptive” or “natural history” studies – tend to be treated as a sort of inferior science, not deserving publication in high-profile journals (e.g. Villard & Nudds 2009, Donald 2010). On the other hand, “hypothesis-driven manipulative experiments” are considered superior, thought to constitute the “Scientific Method” and the sole way to demonstrate cause and effect. This claim has a long tradition. Mayr (1997) wrote: “The experiment came to be treated as if it were the only valid scientific method. Any other method was considered inferior science. But since it was not in good taste to call one’s colleagues bad scientists, these other non-experimental sciences came to be called descriptive sciences”. Some opponents are more tolerant and admit that observational studies belong to the realm of science, but only provided that they are devoted to hypothesis testing.

It seems that, nowadays, the methodological approach used to answer a scientific question sometimes happens to be valued more than the data actually gathered, and that we have lost sight of the fact that careful exploratory studies can often open completely new avenues. Oliver (1991) made this point very clear: “Experiments are only means of obtaining a particular kind of observations, and the proposing of a hypothesis is merely a step in the organization of observations. Nor is the particular procedure, i.e. the scientific method as described here, the only valid means of advancing science. Exploration of the unknown in the absence of hypothesis is a perfectly valid way to proceed in science and often a preferable one if major discovery is the goal”. Despite the abundant evidence demonstrating that curiosity-driven exploration has repeatedly led to major discoveries, and that serendipity was a necessary ingredient of numerous breakthroughs in science (e.g. Selye 1964, Oliver 1991, Mayr 1997, Beehler 2010), some journals explicitly exclude descriptive studies from their pages.

Charles Darwin, who proposed the most important theoretical concept of contemporary biology – evolution by natural selection – was basically a natural historian. Darwin’s theoretical achievements stemmed from his thorough observations of the living world. He asked the right questions because they were generated by the necessity to explain phenomena occurring in nature, whereas researchers who have no solid background in natural history will often waste their time trying to answer the wrong (trivial) questions, testing irrelevant hypotheses (see e.g. the examples presented by Valkiūnas 2005 and Weisman 2008). Models and statistics are powerful tools when applied to help us understand natural phenomena but, when used in a vacuum, they can at best produce spurious results.

The methodological discrimination against observational and natural history studies is harmful (Beehler 2010, Schwenk 2010, Lindenmayer & Likens 2011). Referees, editors and funding agencies alike should be discouraged from disparaging non-experimental methodological approaches. Clearly, there can be both pointless or badly documented experiments and poor descriptive studies, but let the quality of a contribution/application, and not the method used, become the sole criterion of a paper’s acceptance.

Lack of basic field skills/taxonomic knowledge. A minimum requirement for doing any field study is that researchers must be able to identify their study objects. They must be able to identify the species and – if necessary – the sex or age class of individuals. In the past, when field researchers collected data mostly by themselves, they knew intimately the natural history of the species they studied, and were well acquainted with their study areas; see for instance the writings of Charles Darwin, Niko Tinbergen or, for more recent ornithological examples, Bijlsma (2010). These scientists really knew what they were talking about. They could assess data quality and intelligently interpret the biological meaning of their observations. Unfortunately, this requirement is increasingly often not satisfied nowadays.

A growing number of researchers have little familiarity with their study objects (Bijlsma 2010, Wägele et al. 2011 and refs therein). Instead, they work with data collected by field assistants and know their species only as names on a computer screen, or they carry out faunistic surveys without being able to identify species in the field. This second problem is especially acute in species-rich, poorly known tropical regions. Scientists with scanty prior knowledge of the local fauna, having paid only short visits to the study area, can “produce” papers on avian richness and abundance that miss up to two thirds of the species (including some of the most common ones) known to occur in the area. Moreover, there is no guarantee that the species listed have been correctly identified. What is more, the referees who accept such manuscripts apparently have little familiarity with the local faunas or, still worse, are unaware of their lack of knowledge.

Inadequate description of field procedures. The methods section of a scientific article should be detailed enough to enable the reader to evaluate the quality of the work and to replicate the study if necessary. Unfortunately, this is not always the case nowadays. Less and less attention in the methods section is given to the details of field procedures. For example, Lambrechts et al. (2010) show that the percentage of published nest-box studies providing data on at least one feature of the nest-boxes dropped from 60% before 1992 to 30% afterwards. An increasing number of current papers fail even to state that the study was done in nest-boxes. This demotion of field-method description appears understandable: if one claims to have provided general solutions (see above), one should not devote too much attention to the details of the field procedures used in the local area.

The lack of sufficient information on data-gathering methods, though, renders a study virtually groundless. The reader has no means of assessing how the details of observational routines or experimental procedures could have affected the results. Having obtained discordant results in a replicated study, one cannot tell whether the differences stem from disparate methods or reflect real biological differences between the study sites/objects.

Underreporting of field data. The results section of a paper, as the name implies, should be a transparent report of field observations/measurements, permitting readers to make their own assessments and recalculations. It is inherently descriptive. Regrettably, nowadays one can read papers which contain not a single biological datum in the results section! And these are not only theoretical or modelling papers, but can as well be ones whose authors claim to be presenting the results of field observations or experiments. The information is reduced to the outcomes of statistical analyses. Its biological meaning (if any) is well hidden behind statistical jargon, and it is up to the reader to extract it. Authors might report, for example, that “there was a statistically significant correlation (r = -0.45, P < 0.001, N = 38) between spring temperatures and the egg-laying dates”, while they could have been much more forthright, saying “in cold springs beetles tended to lay eggs later (r = -0.45, P < 0.001, N = 38)”. However, as there is no information on the observed distribution of the laying dates, nor on the values of spring temperatures, there is no way to assess this relationship or to compare the results with observations carried out in other places or on other species.
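To make the contrast concrete, the short sketch below (in Python, using the pearsonr function from scipy; all numbers are hypothetical, invented purely for illustration and taken from no real study) shows a reporting style consistent with the recommendation above: the observed distributions are presented first, and the test statistic merely supports the biological statement.

# A minimal sketch of transparent results reporting (hypothetical data).
import numpy as np
from scipy.stats import pearsonr

# Hypothetical field records: mean spring temperature (°C) and median
# egg-laying date (days after 1 April) in each study year.
spring_temp = np.array([6.1, 7.4, 5.2, 8.0, 6.8, 5.9, 7.1])
laying_date = np.array([24.0, 18.0, 29.0, 15.0, 20.0, 26.0, 19.0])

r, p = pearsonr(spring_temp, laying_date)

# Biology first: report the observed distributions, so that readers can
# assess the relationship and compare it with other places or species...
print(f"Laying began {laying_date.min():.0f}-{laying_date.max():.0f} days after 1 April "
      f"(median {np.median(laying_date):.0f} days) in springs whose mean temperatures "
      f"ranged from {spring_temp.min()} to {spring_temp.max()} °C.")

# ...statistics second: the test only supports the biological statement.
print(f"In cold springs laying tended to be later "
      f"(r = {r:.2f}, P = {p:.3f}, N = {len(laying_date)}).")

Presented this way, the same correlation coefficient carries its biological context with it; a reader in another region could immediately judge whether the temperature range and the laying dates are comparable with their own observations.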

This style of presentation, hiding the biological information behind statistical “wrapping” and thereby degrading species and biology to statistical elements (Sand-Jensen 2007), suggests that what really matters is not biology itself but statistics, that researchers get so focused on the statistics that they fail to adequately consider the biological relevance of their results. Moreover, some authors seem to use the most sophisticated statistical methods available not to improve the quality of their research but to impress editors and readers. A paper that includes just a simple statistical test would be much less likely to be accepted than one using, for instance, a linear mixed model – apparently “the quality of science” is somehow related to the complexity of the test.

Lack of biological information in a field-study report should disqualify a submission right from the start. It is so severe an infringement of the rules of scientific documentation that such a paper should not even be sent to referees. Strangely enough, however, this is not so: more and more papers containing exclusively the results of statistical tests are being published (Bijlsma 2010). Limitations on journal space and the attempts of editors to pack as many papers as possible into that limited space are offered as an “explanation” for the publication of such papers. These justifications are completely implausible, since they disregard the most fundamental question: why is it the biological content, and not the technicalities (e.g. statistics), that is expelled from the pages of biological(!) journals?

The current trend towards online publishing, producing both printed and electronic versions of the same paper, renders the space-limitation argument almost entirely moot. The online version can contain supplementary information of almost any length, at practically no extra cost. However, to be fully useful, the online appendices have to be fully integrated with the printed text; otherwise they do not satisfy the basic requirement of scientific publication, i.e. permanent documentation of research findings. Unfortunately, this condition is frequently not met yet: publishers guarantee neither that supporting materials will be permanently stored (“supporting information may also be displayed on an author or institutional website. In such cases, it is the author's responsibility to ensure that the supplied URL for the supporting information remains valid for the lifetime of the article”) nor that they will remain functional or coherent (“…will be published as submitted and will not be corrected or checked for scientific content, typographical errors or functionality. The responsibility for scientific accuracy and file functionality remains entirely with the authors….”). So the publishers take no responsibility for materials which appear only in electronic form; they guarantee the integrity of the printed version exclusively. We have to insist that publishers change this attitude but, until this is attained, all information necessary to assess the quality and biological meaning of a study has to appear in print. Storing full sets of raw data in publicly accessible archives, as proposed by e.g. Moore et al. (2010), should be encouraged as well; access to such data would be very useful for any future re-analyses. That, however, is a separate issue, beyond the scope of this paper, which focuses on reporting in conventional biological journals.

Lack of proper credit to earlier work. Full appreciation of the contributions of predecessors constitutes one of the basic requirements of scientific reporting. However, giving proper credit to the pioneers, to the people who discovered something, proposed new concepts or initiated novel approaches, has become rather exceptional in contemporary articles. Why do so many papers fail to properly acknowledge prior work, although the availability of literature has probably never been better than in the current internet era? It might be simply the authors’ conviction that nothing older than ten years could be of any value. Yet in some instances the reason could be dishonesty as well. If a paper appears to present completely novel, original “hot stuff”, the authors might want to avoid admitting in the introduction that the idea was proposed as early as 1958 or that the results were reviewed in 1962. Instead, many authors tend to ignore earlier sources and refer only to the most recent papers published in the most prestigious journals. This attitude is very unfair to our intellectual predecessors but, as many people seem to accept such behaviour, it unfortunately spreads and allows mere followers to pose as pioneers of fast-moving scientific frontiers.

 

What should be done?

I have discussed the problems raised above with numerous colleagues. Many of them (though not all) share these concerns, but are quite resigned and do not see any chance of reversing these trends. They seem to think that we are all doomed to conform. The following excerpt from a letter is quite typical: “Actually I have rather ambiguous feelings about this, because I just spent half of the past weeks building sexy phrases, seeking buzz-words, and blowing up my past research to impress our science foundation... If you once decided to enter competition for funds, you have to dance with the party, don't you?”

We should not accept this attitude, though. On the contrary, we must oppose these harmful tendencies. If we do nothing to regain high standards of documentation, we will lose our credibility as legitimate scientists. There is yet another reason to reinstate integrity in our work, namely our obligations towards society. We are supported by taxpayers’ money because we promise to provide some benefits, whether of an intellectual (a better understanding of animals or of the living world at large) or of a practical nature. We do not get money to build up our personal prestige, to become celebrities showing off at an intellectual vanity fair. Such abuse of trust and funds is truly disgraceful: no one needs such “science”.

So, what could be done? The way out of this situation is technically not difficult. We simply have to make the understanding of organisms and the living world once again the goal of scientific enquiry, and to make all means (tools) subservient to this goal. Specifically, we have to “dethrone” publications and stop treating them as if they were the purpose of scientific work. We should also strive to reward honest behaviour and to penalize keeping up appearances and dishonesty. This is easier said than done, though: we face a “tragedy of the commons” (Hardin 1968) type of problem. Although people are aware that the continuation of current trends will be devastating in the long run (Brischoux & Cook 2009 and refs therein), they still follow them, as they feel that in the short term sloppy reporting pays off, and they are afraid of being outcompeted if they behave honestly. Obviously, this conviction is not entirely true: papers maintaining high standards are still being accepted and published. Readers are encouraged to apply the standard scientific approach and to test this option empirically for themselves: prepare your next manuscripts according to these standards (see Goals and means of scientific study – a reminder), with no pretence, no bogus language, no overstatements (Bijlsma 2010). Submit them. I bet they will be accepted – provided, of course, that they contain important biological findings.

Authors are not to be excused for submitting pretentious texts, but referees who do not assess such papers negatively contribute to the current trends, too. They could easily change this by paying more attention to the required standards of reporting (see above), by clearly pointing out to the authors what is missing or wrong in their contributions, and by demanding that the authors supplement or modify their texts to make them publishable. Writing such critical reviews usually takes more time and effort, but it is worth doing as – in my opinion – it is the best investment in the future of our science.

Editors are the most critical link in the system, because it is ultimately they who decide what deserves to be published and what constitutes a minimum standard of scientific presentation. Thus, journal editors (editorial boards) alone are in a position to enforce the necessary changes. Introducing these changes would demand of them: (1) modification of the list of requirements distributed in their guidelines for authors to include the aspects crucial for proper documentation; (2) extension of the list of questions which referees have to address to include these aspects as well; and (3) consistent rejection of all manuscripts which do not satisfy these minima. The last point would be the most critical. Additionally, to become credible, to convince readers that from now on the only criterion for publication in a journal is its biological content and nothing else, journals would have to stop advertising insubstantial criteria; statements like “descriptive papers are not to be accepted” or “the journal now enjoys the highest impact factor” (Notkins 2008) should disappear from their pages.

Editors and publishers (and authors alike) seem to believe that if they maintained high documentation standards, their journals would be overrun by less scrupulous competitors, and that the only way to survive is to “lower the bar”. These worries seem completely unfounded, given that submission rates are nowadays so high (probably the highest ever) that editors are forced to reject the majority (often over 60%) of submitted manuscripts, often not because of their low quality but simply for lack of space. The editors and editorial boards of zoological journals are therefore encouraged to rethink their publishing policies and to raise the documentation standards of the papers they publish. I do believe that, working together, authors, referees and editors could overcome the current shortcomings, so that readers will have the opportunity to read papers containing adequate descriptions of methods (allowing the study to be replicated), clearly presenting the biological results (with analytical details relegated to the background), giving proper credit to former work, and honestly assessing biases and the external validity of the results. Let us work together towards this goal.

 

Summary

I have discussed my concerns about the problems raised above with numerous zoologists. Many of them (though not all) share my worries. However, they seem quite resigned, as they see no chance of reversing the current trends. I think we have to oppose these harmful tendencies. If we do nothing to regain high standards of documentation, we will lose our credibility as legitimate scientists.

To reverse the tendencies discussed above, we have to make the understanding of organisms and the living world once again the goal of scientific enquiry, and to make all means (tools) subservient to this aim. Specifically, we have to stop treating publications as if they were the purpose of scientific work. We should also strive to reward honest behaviour and to penalize keeping up appearances and dishonesty. Referees should pay more attention to the required standards of reporting (see above), clearly pointing out to the authors what is missing or wrong in their contributions and demanding that the authors supplement or modify their texts to make them acceptable. Writing such critical reviews usually takes more time and effort, but it is worth the attempt; in my opinion, it is the best investment in the future of our field of science. Journal editors (editorial boards) are in the best position to enforce the necessary changes. This would demand of them: (1) modifying the list of requirements distributed in their guidelines for authors to include the aspects crucial for proper documentation; (2) extending the list of questions which referees have to address to include these aspects as well; and (3) consistently rejecting all manuscripts not fulfilling these minima. Additionally, to become credible, to convince readers that from now on the only criterion for publication in a journal is its biological content and nothing else, journals would have to stop advertising insubstantial criteria. I do believe that, working together, authors, referees and editors could overcome the currently existing inadequacies, so that readers will have the opportunity to read papers containing adequate descriptions of methods, clearly presenting the biological results, giving proper credit to former work, and honestly assessing biases and the external validity of the results. Let us work together towards this goal.

 

Acknowledgments

The ideas presented herein have been influenced by correspondence and discussions with several people, including R. G. Bijlsma, N. Chernetsov, M. Cichoń, K. Cockle, T. Coppack, F. R. Cook, P. F. Donald, A. Gosler, P. A. Gowaty, P. Isenmann, P. Jones, L. S. Johnson, G. Martin, E. Matthysen, M. Lambrechts, T. Nudds, G. Ritchison, E. Walters, and the participants of the Round Table Discussions “Bad Practice in Field Biology” held during the International Ornithological Congress in Campos do Jordão (Brazil, 2010) and during the European Ornithologists’ Union Conference in Riga (Latvia, 2011). Although I have not always followed their advice, and some of them would disagree with several of my statements, I am nevertheless grateful to all of them for inspiration.

 

References

Arnold D. N. 2009. Integrity under attack: the state of scholarly publishing. SIAM News 42: 1-3.

Beehler B. M. 2010. The forgotten science: a role for natural history in the twenty-first century? J. Field Orn. 81: 1-4.

Bijlsma R. G. 2010. Ornithology from the tree tops. Ardea 98: 1-2.

Brischoux F., Cook T. R. 2009. Juniors seek an end to the impact factor race. BioScience 59: 638-639.

Donald P. F. 2010. Editorial. Ibis 152: 1-2.

Godfray Ch. 2010. President’s soapbox. Bul. Brit. Ecol. Soc. 41: 2-3.

Hardin G. 1968. The tragedy of the commons. Science 162: 1243-1248.

Hołyński R. B. 2010. Taxonomy and the mediocrity of DNA barcoding - some remarks on Packer et al. 2009: DNA barcoding and the mediocrity of morphology. Arthropod Syst. & Phyl. 68: 143-150.

Lambrechts M.M., Adriaensen F., Ardia D. R., Artemyev A.V., Atiénzar F., Bańbura J., Barba E., Bouvier J-C., Camprodon J., Cooper C. B., Dawson R. D., Eens M., Eeva T., Faivre B., Garamszegi L. Z., Goodenough A. E., Gosler A. G., Grégoire A., Griffith S. C., Gustafsson L., Johnson L. S., Kania W., Keišs O., Llambias P. E., Mainwaring M. C., Mänd R., Massa B., Mazgajski T. D., Møller A. P., Moreno J., Naef-Daenzer B., Nilsson J.-Å., Norte A. C., Orell M., Otter K. A., Park C. R., Perrins C. M., Pinowski J., Porkert J., Potti J., Remes V., Richner H., Rytkönen S., Shiao M. T., Silverin B., Slagsvold T., Smith H. G., Sorace A., Stenning M. J., Stewart I., Thompson C. F., Tryjanowski P., Török J., van Noordwijk A. J., Winkler D., Ziane N. 2010. The design of artificial nestboxes for the study of secondary hole-nesting birds: a review of methodological inconsistencies and potential biases. Acta Orn. 45: 1-26.

Lindenmayer D. B., Likens G. E. 2011. Losing the culture of ecology. Bul. Ecol. Soc. Amer. 92: 245-246.

Mayr E. 1997. This is biology. Cambridge, Mass., Belknap Press.

Moore A. J., Mcpeek M. A., Rausher M. D., Rieseberg L., Whitlocks M. C. 2010. The need for archiving data in evolutionary biology. J. Evol. Biol. 23: 659-660.

Notkins A. L. 2008. Neutralizing the impact factor culture. Science 322:191.

Oliver J. E. 1991. The incomplete guide to the art of discovery. New York, Columbia University Press.

Sand-Jensen K. 2007. How to write consistently boring scientific literature. Oikos 116: 723-727.

Schwenk K. 2010. Implementing the organismal agenda. BioScience 60: 673-674.

Selye H. 1964. From dream to discovery: On being a scientist. New York, McGraw-Hill.

Valkiūnas G. 2005. Peculiarities of distribution and pathogenicity of avian malaria parasites and other related Haematozoa. Alauda 73: 211-213.

Villard M.-A., Nudds T.D. 2009. Whither natural history in conservation research? Avian Cons. Ecol. 4: 6. <http://www.ace-eco.org/vol4/iss2/art6>.

Wägele H., Klussmann-Kolb A., Kuhlmann M., Haszprunar G., Lindberg D., Koch A., Wägele J. W. 2011. The taxonomist – an endangered race. A practical proposal for its survival. Front. Zool. 8: 25.

Weisman R. G. 2008. Advice to young behavioral and cognitive scientists. Beh. Proc. 77: 142-148.

Wesołowski T. 2011. Inadequacies in nestbox studies reporting – a review of problems. Acta Orn. 46: 13-17.