ABSTRACTS OF THE SECOND BIENNIAL CONFERENCE SPSP 2009

June 18, 19 and 20, 2009

University of Minnesota

 United States

Copyright is retained by the authors of the individual papers in this volume.

SPSP (Society for Philosophy of Science in Practice), and

CEPTES (Center for Philosophy of Technology and Engineering Science)

University of Twente

P.O.Box 217

7500 AE Enschede

The Netherlands

tel. +31 53 489 80 31

www.ceptes.nl

www.gw.utwente.nl/spsp/

ABOUT SPSP

SOCIETY FOR PHILOSOPHY OF SCIENCE IN PRACTICE

SPSP is an international group of philosophers and other scholars committed to thinking philosophically about science as it is actually practiced, not as an abstract and static system of knowledge. The phrase "science in practice" is intentionally ambiguous, covering both scientific practice and science as applied toward practical aims of life.

SPSP was founded in 2006, in response to a perceived neglect of science-in-practice in existing academic societies for philosophy of science. We invite participation not only from philosophers but also from scientists and practitioners in other scientific and technical fields such as engineering and medicine; philosophy is conceived broadly, to welcome input from historians, sociologists, and other scholars of science and technology studies who are interested in grappling with fundamental questions about knowledge. Currently SPSP is a very informal organization, whose main activity is the biennial conference series, which aims to provide a broad forum for philosophy of science in practice. Our inaugural conference was held at the University of Twente in the Netherlands in 2007, and the second conference will be at the University of Minnesota.

Aside from the conferences, we maintain a website and a membership list, through which we disseminate relevant information. We also encourage the formation of local branches. For further details on our activities, please see:

http://www.gw.utwente.nl/spsp/

Membership is free, and granted through a very simple application process:

http://www.gw.utwente.nl/spsp/membership/Membership%20Mailinglist.doc/

The SPSP Organizing Committee (currently Rachel A. Ankeny, Mieke Boon, Marcel Boumans and Hasok Chang) welcomes suggestions from all members about the governance and activities of the Society.

PROGRAMME COMMITTEE

Mieke Boon

University of Twente


Hasok Chang

University College London


Marcel Boumans

University of Amsterdam


Rachel A. Ankeny

University of Sydney


With additional members for the programming of the Minnesota conference:

Douglas Allchin

University of Minnesota


Tarja Knuuttila

University of Helsinki


Julian Reiss

Erasmus University Rotterdam


Andrea Woody

University of Washington


CONTENTS

Keynote Addresses 11

1. Mary Morgan: Facts in Practice – The Lives of Facts 11

2. Helen Longino: Pluralism and Practice: Thinking about Behavioral Research 11

Symposia 12

Symposium 1: The Nature and Epistemological Status of the Phenomena Established Through Experimental Practices 13

1. Frédéric Wieber: Theoretical technologies in an “experimental” setting: empirical modeling of proteinic objects and simulation of their dynamics within scientific collaborations around a supercomputer 13

2. Léna Soler: Tacit Aspects of experimental practices: What Epistemological Consequences? 14

3. Emiliano Trizio: Contingency and the knowledge of phenomena 15

Symposium 2: From Principles to Practice: The Promises and Problems Associated with Implementing Psychiatric Genetic Research 17

1. Barbara A. Koenig et al.: Addiction: A “Disease of the Brain”? 17

2. Kenneth F. Schaffner: Obtaining and Using Genetic Information about Mental Disorders to Improve Patient Satisfaction and Recruitment 18

Symposium 3: Causation in Human Behavioral Genetics 20

1. Kathryn Plaisance: Reframing Causal Questions in Behavioral Genetics 20

2. James Woodward: Causation in Biology: Stability, Proportionality, and Specificity 21

3. Kenneth Waters: Causes That Matter 21

4. Eric Turkheimer: GWAS, EWAS, and Causation in Uncontrolled Systems 22

Symposium 4: The Economics of Scientific Pluralism 24

1. Rogier De Langhe: Increasing returns to adoption in science 24

2. Matthias Greiff: A model of consensus and dissensus 25

3. Jeroen van Bouwel: Epistemic democracy on offer. The politics and economics of scientific pluralism. 26

Symposium 5: Probing the Philosophical Consequences of Experimental Practices in Developmental Biology 28

1. Alan Love: The Heterogeneity of Experimental Practices in Developmental Biology: Epistemological Implications 29

2. Laura Gammill: Combining Embryo Manipulation and Genomics to Understand Neural Crest Induction 29

3. Stephen Ekker: Studying Zebrafish Ontogeny with Insertional Mutagenesis 29

4. Ann Rougvie: From Temporal Puzzles to Powerful Tools: microRNAs and Nematode Ontogeny 29

5. David Zarkower: Answering Questions about Sex Determination with Conditional Gene-Knockouts 29

6. Jonathan Slack: Model Organisms and Tissue Regeneration 30

7. David Greenstein: Philosophy of Fertilization in C. elegans 30

Symposium 6: Causation and Evidence in the Historical Sciences 31

1. Carol Cleland: Common Cause Explanation and the Asymmetry of Overdetermination 31

2. Kevin Francis: Causal Explanations of Megafaunal Extinction 31

3. Derek Turner: Historical Trends and Philosophical Theories of Causation 32

Symposium 7: The Epistemic Roles of Organisms in Biological Practice 34

1. Staffan Mueller-Wille: Exemplars, Records, Tools: Organisms in Botanical Research, c. 1750-1850 34

2. Christian Reiss: The Mexican axolotl’s long history as an experimental animal 35

3. Sabina Leonelli and Rachel A. Ankeny: Re-Using Data, Re-Thinking Organisms: The Epistemic Impact of Databases on Model Organism Biology 35

Paper Abstracts 37

Douglas Allchin [J2]: Socializing Epistemics: Resolving the Ox-Phos Debate 38

Chiara Ambrosio [F3]: From Similarity to Homomorphism: Toward a Pragmatic Account of Representation in Art and Science, 1880-1914. 39

Hanne Andersen [C3]: Modeling Collective Belief in Science 40

Monica Aufrecht [B2]: Whose knowledge matters: The Context Distinction: controversies over feminist philosophy of science 41

Michael Barany [D3]: Computer experiments in harmonic analysis 42

Justin Biddle [B2]: Transient Underdetermination and Values in Science 44

Robyn Bluhm [F3]: Cognitive Subtraction Techniques and Neuroimaging Research 45

Mieke Boon and Tarja Knuuttila [H3]: How do models give us knowledge? Models as Epistemic Tools 46

Kirstin Borgerson [C3]: Amending and Defending Critical Contextual Empiricism: Lessons from Medical Research 48

Thomas Boyer [J3]: Coexistence of Several Interpretations of Quantum Mechanics and the Fruitfulness of Scientific Works 49

Alexandra Bradner [F3]: On the Very Idea of a Style of Reasoning 50

Evelyn Brister [E3]: Knowledge, Values, and Epistemic Authority in Land Management 51

John Capps [F2]: Pragmatic Truth and Scientific Practice 52

Hasok Chang [E1]: Historical Experiments, Lost Knowledge, and the Purpose of Science Education 53

Brendan Clarke [H2]: Inverting the Pyramid: A Reassessment of the Roles of Experiment in Evidence-Based Medicine 55

Sharon Crasnow [G3]: Evidence for Use: The Role of Case Studies in Political Science Research 56

Deepanwita Dasgupta [A3]: Scientific Discovery in an Asymmetrical Landscape: C.V. Raman and the Building of a Research Tradition in Colonial India 57

Barry DeCoster [D2]: Improving Medical Explanations: Rethinking Explanatory Structure and Agency 58

Leen De Vreese [A2]: The Need For a Practical Concept of Disease 60

Jan De Winter [F2]: Explanations in Software Engineering: The Pragmatic Point of View 61

Emily L. Evans [D2]: Uncertainty and Public Health Research Ethics 63

Melinda B. Fagan [A4]: Collaboration, toward an integrative philosophy of scientific practice 64

Lily Farris [E2]: Uptake from the Commons: Tracking the access and use of public domain data released by the C. elegans Gene Knockout Consortium 65

James H. Fetzer [G3]: Assassination Science: Critical Thinking in Political Contexts 66

Maya J. Goldenberg [H2]: Critical condition: Can feminist accounts of evidence rehabilitate evidence-based medicine? 67

Marta Halina [A2]: Harmonizing Models and Phenomena: The Case of Aflatoxin 69

Susan C. Hawthorne [E4]: Science, Society, and Reinforced Intolerance of Mental Illness 70

Peter Heering [E1]: Styles of Experimenting as an Analytical Category for Scientific Practices 71

Robert Hudson [J3]: Realism and the Bullet Cluster 72

Kristen Intemann [H2]: Evidence for Use: The Case of the HPV Vaccine 73

Vincent Israel-Jost [G2]: Data processing in observation 74

María Jiménez-Buedo and Luis M. Miller [C4]: Experiments in the Social Sciences: The Relationship between External and Internal Validity 75

Marc Kirsch [E3]: In search of tools to bridge the gap between science and policy making – On the notion of research programmes in sustainable development debates. 76

Henrik Kragh Sørensen [D3]: Are proofs mathematical experiments? Are mathematical experiments proofs? 77

Chris Mack [H3]: Mathematical Realism: A View from Industry 78

Deborah Mayo [C2]: A Philosophy of Evidence Relevant for Regulation 79

Leah McClimans [E4]: A Philosophical Framework for Patient-Reported Outcome Measures 80

Amy L. McLaughlin [F2]: Pragmatic Recommendations for Doing Science within One’s Means 81

Zahra Meghani and Jennifer Kuzma [G3]: Democratization of Risk Assessment of Converging Technologies 82

Barton Moffatt [B3]: A Reexamination of Biological Information from the Perspective of Practice 83

Bert Nederbragt [B3]: Cells that count. The standardizing of diagnostic tests for bovine mastitis. 84

Antigone M. Nounou [J3]: A story about gauge potentials, holonomies and time 85

Isabelle Peschard [H3]: On the Use and Assessment of Models: Forget about Representation 86

Elizabeth Potter [B2]: Hybrid Values in Epistemic and Non-Epistemic Practices 87

Erich H. Reck [F3]: Styles of Reasoning in Mathematics: the Case of Richard Dedekind 88

William Rehg [C3]: Crossing Boundaries: Contexts of Practice as Common Goods 89

Mark Risjord [J2]: Why A Nurse Knows Better: Standpoint Epistemology and Nursing Science 90

Stéphanie Ruphy [F3]: From Hacking’s plurality of styles of scientific reasoning to « foliated » pluralism, a new form of ontologico-methodological pluralism 91

Elizabeth Silver [J2]: Epistemology Under Pressure: Sacrificing Knowledge to Keep Big Pharma Under Control 92

Miriam Solomon [D2]: Three New Paradigms in Medical Epistemology 93

Aris Spanos [C2]: On Securing the Trustworthiness of Evidence: Modeling the Global Surface Temperature Data 94

Kent Staley & Aaron Cobb [C2]: Internalist and Externalist Aspects of Justification in Scientific Inquiry 96

Sarah Star[F2]: Revisiting Ontology and Its Consequences. 99

Sifting sound science from snake-oil: In search of demarcation criteria for science as actually practiced. 99

Mikaela Sundberg [D3]: Exploring and Accounting for Unexpected Simulation Results 100

Kimberly Thomas-Pollei [E2]: The Rise of Genetics: Producing Knowledge, Regulating Bodies, and Transforming Patients 101

Dana Tulodziecki [A2]: Reasoning about cholera: the inferential practices of John Snow 102

Joel Velasco [B3]: Parsimony and Model Selection in Phylogenetic Networks 103

Bradley E. Wilson [C4]: Nature as Laboratory: Experiments in Ecology and Evolutionary Biology 104

Monika Wulz [A3]: Social Concepts and Methods in Epistemology around 1930: Edgar Zilsel’s Sociohistorical Approach to Epistemology and his Concept of Science as an “Infinite Process” 105

Alison Wylie [A4]: Transformative Criticism in Archaeology: The Epistemic Rationale for Collaborative Practice 107

Muk Yan [C4]: Reliability and External Validity in Neurobiological Experiment 108

 

KEYNOTE ADDRESSES

 

1. FACTS IN PRACTICE – THE LIVES OF FACTS


Mary Morgan
London School of Economics

 

2. PLURALISM AND PRACTICE: THINKING ABOUT BEHAVIORAL RESEARCH

Helen Longino

University of Minnesota

 

SYMPOSIA

 

SYMPOSIUM 1: THE NATURE AND EPISTEMOLOGICAL STATUS OF THE PHENOMENA ESTABLISHED THROUGH EXPERIMENTAL PRACTICES

Léna Soler

Archives Henri Poincaré – LHSP

This thematic session understands “experimental practices” in a broad sense, including the practices of simulation that are, nowadays, so often involved in the conception and interpretation of what is finally viewed as an experimentally established phenomenon. The issue of the nature and epistemological status of such phenomena will be addressed along three lines. First, F. Wieber will examine how modeling and simulating practices in the history of a science of complex objects (e.g. proteins) are deeply intertwined with experimental data and share certain characteristics with experimental practices. The impact of these modeling and simulating technologies on the nature and status of experimental results in protein chemistry will be mentioned. Second, L. Soler will discuss some consequences of the involvement of tacit aspects in experimental practices, especially with respect to the condition of the substitutability of the experimenters, which has important implications for the universality and the inevitability of experimental facts, and thus for the nature of experimental knowledge. Third and finally, Emiliano Trizio will analyze the ways in which the practical turn gives some plausibility to the idea of the contingency of scientific results.

The three contributors of this thematic session are members of a French, interdisciplinary research group called “PratiScienS”, created in 2007 by Léna Soler at Nancy. The aim of this group is to draw a global and systematic account from the rich patchwork of specific analyses produced by the practical turn. For further details, see

http://poincare.univ-nancy2.fr/Activites/?contentId=2537&languageId=1

 

1. Theoretical technologies in an “experimental” setting: empirical modeling of proteinic objects and simulation of their dynamics within scientific collaborations around a supercomputer

Frédéric Wieber

Archives Henri Poincaré – LHSP, Nancy, France

This paper will examine, as a case study, some modeling and simulating practices in protein chemistry. In this field, theorists try to grasp proteinic objects by constructing models of their structures and by simulating their dynamical properties. The kind of models they construct and the necessity of performing simulations are linked with the molecular complexity of proteins. Two main types of problems emerge from this complexity. First, experimental problems arise when scientists want to perform on (and adapt to) proteins certain physical experiments (X-ray crystallography, NMR, neutron scattering…) and try to interpret the experimental data thus produced. Second, theoretical problems of computational complexity arise with the application of quantum mechanics to such very large objects. While the first type of problem has historically called for the development of theoretical approaches (in order to refine experimental data and to gain access to certain properties of proteins that were very difficult to obtain experimentally), the second type, which is common to chemistry as a whole, has led protein scientists to develop a special kind of model, the so-called “empirical models” (in contrast to “ab initio calculations”), and the massive use of computers after 1960 helped them to construct these models and extend their use. In the 1970s, these computerized models were incorporated into a simulation method termed “Molecular Dynamics” (MD), elaborated in statistical physics. This has led to greater insight into experimentally inaccessible dynamical properties of proteins.

The computer, as a technological instrument, has influenced in a major way the form of the models that have been constructed. Its limited computational capacities have also influenced the way the MD simulation method has been applied in the case of proteins. That is why I refer to these modeling and simulating activities as “theoretical technologies”. The development of these theoretical technologies must be understood in an “experimental” setting. To show this, I will first analyze the nature of the models actually constructed, in order to emphasize the work of assembling experimental data and making estimations (due to the empirical problems mentioned earlier) that this modeling activity requires. I will then examine the adaptation of the MD simulation method to proteins. For this adaptation, specialists of the MD method (from statistical physics) collaborated with protein theorists, notably, in Europe, within a particular institution, and this collaboration was made possible by the computing facilities of that center. I will thus emphasize the way the accessibility of computers has led to practical collaboration among scientists, and the importance of the tacit dimensions of simulation production during this early period of development. A parallel between experimental practices (around big instruments) and simulating practices (around supercomputers) can then be proposed.

While these two main lines of analysis indicate the potentially hybrid nature (between theory and experiment) of these modeling and simulating activities, they will also show how the technological nature of these practices affects the status of the results they produce. Finally, the impact of these technologies on the nature and status of experimental results in protein chemistry will be discussed.

 

2. Tacit Aspects of experimental practices: What Epistemological Consequences?

Léna Soler

Archives Henri Poincaré – LHSP, Nancy, France

For several decades, many sociologists and philosophers of science, especially the so-called ‘new experimentalists’, have stressed the need for detailed studies of real, ongoing experimental practices, and claimed that a renewed conception of science results from such an approach. Among the new objects of interest that have emerged from laboratory studies, an important one is the tacit dimension of scientific practices. Harry Collins, in particular, insisted that irreducibly tacit presuppositions and skills are inevitably involved in experimental practices, and that these tacit resources play an essential role in the stabilization of successful scientific achievements. The opacity of experimental practices has been analyzed in different ways, but on the whole, it has been claimed to have harmful epistemological consequences with respect to crucial issues, such as the nature of experimental facts, scientific realism, scientific rationality, and the contingency of what acquires the status of an established scientific result in practitioners’ eyes. Such claims remain highly controversial. The aim of this talk is to revisit this question on the grounds of some new insights issuing from the study of scientific practices. First, a brief overview of the possible and controversial epistemological consequences of tacit aspects in experimental practices will be presented. Then one of these consequences will be discussed in more detail, namely the condition of the substitutability of experimenters. This condition is traditionally viewed as a necessary feature of any good science, but it is supposed to be seriously shaken, if not completely invalidated, when tacit resources are taken into account. For the purpose of this discussion, two kinds of opacity of knowledge-producing practices will be introduced, with the intention of achieving a better understanding of what is involved under the heading of ‘tacit aspects’: an opacity relative to descriptions, and an opacity relative to justifications, of what has been done. Correlatively, two kinds of experimental configurations will be distinguished, namely stabilized versus non-stabilized experimental practices. Since very different intuitions and epistemological readings are commonly associated with these configurations, each of them will be analyzed in turn, before considering what they can teach us about the epistemological implications of the substitutability clause. Finally, some consequences of this analysis with respect to the issue of the contingency of experimental facts will be sketched, with the aim of making sense of the counter-intuitive idea that scientific achievements could be, at the very same time, both truly robust and, nevertheless, contingent in a non-trivial sense.

 

3. Contingency and the knowledge of phenomena

Emiliano Trizio

University of Lille 3 (France)

Archives Henri Poincaré (Nancy, France)/Archives Husserl (Paris)

The aim of this paper is to explore the relations existing between the new ideas and insights resulting from the so-called practical turn in philosophy of science and the problem of contingentism. Contingentism, as it has been recently defined by Ian Hacking, is the claim that the history of a particular field of science could have taken a different route from the actual one, and that the resulting imaginary science could have been both as successful as the real one and, in a non-trivial way, incompatible with it. Inevitabilism consists in the denial of this claim. Unfortunately, this complicated issue has not so far received enough attention within philosophy of science, and its specific importance has not been fully acknowledged. It is not surprising that inquiries issuing from the practical turn should address this problem, given that one of their key methodological choices consists precisely in the shift of focus from the analysis of constituted results and their evidential basis to the study of the generative process leading to them. Indeed, the contingency thesis is linked to the idea that, in order to evaluate the epistemic and ontological status of scientific claims, the very fact that those claims are the result of a specific, non-repeatable historical generative process must be taken into account. In this paper, while acknowledging that it is extremely hard to give an argument that establishes the validity of the contingency thesis in a compelling way, I will argue that this thesis can be both clarified and, to a certain extent, made plausible by the fact that what is commonly regarded as a phenomenon emerges through a complex process of stabilization of experimental practices.

First, I show that the contingency thesis implies a basic multiplicity thesis, that is, the claim that, given a certain subject matter, different and incompatible successful accounts of it are possible. I will then analyze the notion of scientific “success” by considering three different characterizations of it: 1) truth, 2) adequacy to the phenomena, 3) robust fit obtained through the mutual adjustment of the several elements of scientific practices, one of which is the phenomena. On the basis of several works by advocates of the practical turn, I retain the third characterization of scientific success and argue that the role played by creativity in scientific activities and the fact that there is a multiplicity of paths that researchers can legitimately follow in order to obtain a robust fit jointly support a qualified version of contingentism. The necessary qualification will result from the difficulty of giving a clear content to the clause “equally successful” when the chosen characterization of phenomena is the one resulting from the “robust fit” approach. Subsequently, I will try to understand in what way contingentism, thus construed, implies that scientific results are history-dependent, and what the consequences of this fact are for the rationality of scientific enquiries.

 

SYMPOSIUM 2: FROM PRINCIPLES TO PRACTICE: THE PROMISES AND PROBLEMS ASSOCIATED WITH IMPLEMENTING PSYCHIATRIC GENETIC RESEARCH

James Tabery

University of Utah

In this thematic session, participants will explore the opportunities and dilemmas associated with implementing psychiatric genetic research in various domains: addiction diagnosis and therapy, patient care and recruitment, and genetic screening. The general theme is a focus on the value of incorporating psychiatric genetic research into these domains tempered by a cautionary tale about the need to get that incorporation right. Participants include practicing scientists (Koenig, Dingel, Robinson, and McCormick) as well as philosophers of science (Schaffner and Tabery).

 

1. Addiction: A “Disease of the Brain”?

Barbara A. Koenig, Depts of Psychiatry and Medicine, Program in Professionalism and Bioethics, Mayo Clinic and College of Medicine

Molly J. Dingel, University of Minnesota Rochester

Marguerite Robinson, Program in Professionalism and Bioethics, Mayo Clinic and College of Medicine

Jennifer B. McCormick, Depts of Medicine and Health Sciences Research, Program in Professionalism and Bioethics, Mayo Clinic and College of Medicine

Historically, addiction has been understood as a sin, a crime, a bad habit, a moral weakness, and, most recently, “a disease of the brain.” Though the social influences on drug use are well documented, inquiries into the biological influences on addiction are becoming routine. A focus on the biology of addiction shifts attention away from the complex social web of our relationships with friends and family, our economic situations, etc. A potential harm of focusing on biological etiology stems from a concept of addiction that is dissociated from social context. For example, focusing on genetic testing may lead one to over-emphasize pharmaceutical “magic bullet cures” and to under-emphasize, and under-fund, more traditional therapies and prevention strategies. In addition, genetic research may fundamentally change our conceptions of deviance, our identities, and our susceptibility to drug use as something embedded not in our social context, but in our biological make-up.

Reductionist approaches dominate scientific research, not because researchers fail to understand the critical role of the social environment when interrogating the etiology of addictive disorders, but because the tools provided by science promise new insights into a particularly challenging and recalcitrant health problem. How can we best approach the ethical and policy challenges that flow from the “geneticization” of addiction studies?  What are the challenges of translating addiction genetics findings into practice? Can harms be minimized? How can we manage the “hype” created by simplistic media reporting of genetic findings? 

Although research is important in elucidating scientific understanding of addiction, several unintended consequences emerge. Research results may be co-opted for financial gain. For example, the tobacco industry may use this research in its strategy to deflect legal and social responsibility for substance abuse-related illness away from the substance and onto the free choice of consumers, claiming that some may escape the addictive qualities of nicotine and therefore smoke “safely.” New direct-to-consumer genetic testing companies may capitalize on early scientific findings, seeking to market tests based on faulty claims about their accuracy and predictive utility. Both of these industry strategies stem from an overly simplistic and optimistic understanding of the complexities of genetic studies of smoking behavior. Because of the small effect conveyed by any one gene, and because genetic influence is synergistic among many genes and myriad environmental influences, it is not possible to identify anyone as being at low risk of the adverse consequences of smoking.

In light of these considerations, we offer two broad recommendations: First, genomic research on addiction behavior is a valid avenue of scientific inquiry. But it is essential that this line of inquiry not be conducted independently of the broad social context affecting the translation of findings into practice.  Results should not be promoted in a reductionist, simplified—“a gene for ...”—manner.  To meet their responsibility to minimize unintended harms, scientists, policy makers, journalists, and ethicists must articulate the full complexity of this research.  Second, we must not let our support of genomic research overshadow the quest to elucidate further social and environmental factors contributing to the etiology of addiction.

 

2. Obtaining and Using Genetic Information about Mental Disorders to Improve Patient Satisfaction and Recruitment

Kenneth F. Schaffner

University of Pittsburgh

This presentation asks how integrating genetic information about mental disorders might be made most useful for diagnosis, prognosis, and treatment. Useful, that is, both for the patient in relation to his or her fundamental values and life plan, and for the patient’s family. Such genetic information is situated within a biopsychosocial framework generally, but implemented via what is called the “Comprehensive/Integrated Diagnostic Model” (C/IDM) of the World Psychiatric Association’s International Guidelines for Diagnostic Assessment (IGDA) (WPA, 2003), as well as the emerging WPA initiative on a “Person-centered Integrated Diagnostic Model,” or PID (Mezzich, 2007). The talk considers both current and near-future uses of genetic information, as well as the rapidly growing psychiatric studies of gene-by-environment interactions, and covers both research and treatment contexts. In practice, the C/IDM and its companion PID are probably best thought of not only as diagnostic models but as models for patient care. Specific instruments are in the process of being adopted or developed to provide empirical assessments. These include a genetically informed version of the PID and a genetically oriented form of the MacCAT-CR. Instruments would also be developed for determining cultural and sub-cultural sensitivities to obtaining and employing such genetic information, whether these group issues are based on ethnic or economic factors. One major aim of this talk is to indicate how this nascent approach can improve the recruitment process, including affected individuals and family members.

Our prime example within the general area of psychiatric genetics will be schizophrenia and schizophrenia spectrum genetics. In the area of schizophrenia, a series of studies combining linkage and association analyses in the same family sets has identified promising candidate genes (DTNBP1, NRG1, DISC1, DAOA) (Allan, Cardno, and McGuffin, 2008), and dysbindin and neuregulin 1 are particularly strong candidates for testing and follow-up. Other candidates are of synergistic interest, including COMT, and additional genes are likely to emerge through the application of genome-wide association studies (GWAS) of schizophrenia, which have in the past two years become the preferred methodology in both common genetic disease research and psychiatric genetics.

The “Comprehensive/Integrated Diagnostic Model” (C/IDM) and “Person-centered Integrated Diagnostic Model” (PID) approaches, which constitute the general framework of this talk, facilitate an integration of standard pre-existing nomothetically based classifications with idiographic narrative accounts. This issue is a ripe area for further investigation in connection with the C/IDM, as evolving DSM-V and ICD-11 drafts strive to incorporate useful genetic information. Recent publications (Mezzich, 2007) have outlined the ways in which an in-progress project to write a Guide for PID will intercalate with ICD-11 developments. In his (2007) paper, Mezzich specifically combines “collaboration with WHO and various WPA components towards the development of the WHO ICD-11 Classification of Mental Disorders” with the Person-centered Integrative Diagnosis (PID) research project.

 

SYMPOSIUM 3: CAUSATION IN HUMAN BEHAVIORAL GENETICS

Kathryn Plaisance

Leibniz University of Hannover

The talks in this session aim to reframe the philosophical debate about causation in behavioral genetics – and biology more generally – in a way that attends to scientific practice and the aims of particular scientific disciplines. In the first talk, some background will be given with respect to previous philosophical discussions of causation in behavioral genetics. In addition, a more fruitful epistemological approach to questions of causation in behavioral genetics will be suggested, which takes into account the methodological limitations faced by practitioners. At the end of the first talk, additional questions will be raised in order to provide some important distinctions regarding causes and causal explanations. Each of these questions will be taken up and addressed in detail in the three talks that follow, with a particular emphasis on actual scientific practice (to that end, this session includes a practicing behavioral geneticist).

 

1. Reframing Causal Questions in Behavioral Genetics

Kathryn Plaisance

Leibniz University of Hannover

One of the primary aims of human behavioral genetics is to determine the extent to which genetic and environmental differences contribute to individual differences in various psychological traits. To this end, behavioral geneticists parse phenotypic variation according to its sources, a method that is typically referred to as analysis of variance (ANOVA). Many philosophers – and likeminded scientists – have criticized this method, arguing that analysis of variance is not the analysis of causes. This criticism can be found, for example, in Richard Lewontin’s seminal paper, “Analysis of Variance and Analysis of Causes” (1974), and is repeated in many subsequent philosophical accounts (e.g., Kaplan 2000; Northcott 2006). In particular, Lewontin argues that ANOVA is useless for the kinds of causes that are important for purposes of improvement and “indeed has no use at all.” In effect, Lewontin asks, “Can ANOVA provide causal information?” and answers “no”.

I contend that this question is ill-formed to begin with. Rather than asking whether a particular method can provide causal information, and then citing limitations to the method, philosophers of science should be asking, “To what extent are these methods able to provide valid causal claims? That is, how well do they fulfill generally accepted criteria for good explanations in light of the limitations they face?” The reason for reformulating the causal question in this way is that, epistemologically speaking, it is better to ask what evidence there is for a particular causal claim, and how certain we can be that it is true, rather than merely asking whether or not it provides causal information full stop.

Additionally, in order to better understand scientific practice, it is useful to raise other questions about the causes that a particular methodological approach is able to identify or the causal explanations it is able to provide. For example, what kinds of causes are identified? Are these the causes we are interested in? Is the behavior to be explained amenable to study by the methods being used? These are just the questions taken up by the three talks that follow.

 

2. Causation in Biology: Stability, Proportionality, and Specificity

James Woodward

California Institute of Technology

Philosophical discussions of causation have often tended to focus, understandably enough, on finding criteria that distinguish causal from non-causal (or merely correlational) relationships. There is, however, another project, also belonging to the philosophy of causation, that has received somewhat less attention. This is the project of elucidating and understanding the basis for various distinctions that we make among causal relationships. My talk is intended as a contribution to this second project. In particular, I will focus on certain causal concepts – stability, proportionality and specificity – that may be used to mark distinctions among causal relationships in biological contexts. I will attempt to show how each of these may be captured within a broadly interventionist framework for thinking about causation.

The stability of a causal relationship between cause variable C and effect variable E has to do with the extent to which this relationship remains stable or unchanged as various other background factors change. Proportionality has to do with the extent to which cause and effect variables are characterized in such a way that possible changes in the state of the cause are related to possible changes in the state of the effect. Specificity has to do both with the extent to which the cause is related to just one effect rather than many different effects (relative to some specified set of alternatives) and also with whether fine-grained influence in the sense of David Lewis is present. These causal notions will be illustrated by examples drawn from genetics, immunology and epidemiology. The interrelations among these different causal notions and their connection to the notion of a causal mechanism will also be explored.

 

3. Causes That Matter

Kenneth Waters

University of Minnesota

Behavior, like complex phenomena more generally, involves a multiplicity of causes. In the context of such phenomena, causal analyses typically focus on some causes and not others. On what basis are selections among causes made? Social scientists often select the causes that are actually making the difference in the populations under investigation, sometimes because those are the causes the scientists find interesting, sometimes because those are the causes that are most readily identifiable, and sometimes for both reasons. Perhaps, however, the causes that matter to many of us in society are not the causes that are actually making the difference in a population. Perhaps the causes that matter to us are ones that could make a difference if we intervened on them.

I will use behavioral genetics as a case study to illustrate how the causes that matter to us might not be the causes that are revealed by certain kinds of scientific practices. I will begin by distinguishing between two kinds of causes in populations (or two kinds of ‘population level’ causes): actual difference makers and potential difference makers. I will show that the practice of behavioral genetics, which observes the effects of causes that actually differ in the populations under study, can identify actual difference makers. But since this practice does not observe the effects of intervening on causes that do not actually differ, it cannot identify potential difference makers.

I will argue that the causes that matter to parents, educators, and helping professionals might well be potential difference makers rather than actual difference makers. This mismatch between causal interests and scientific practice, rather than the alleged causal confusion on the part of practicing scientists, is what is really at issue. And this is not just an issue for behavioral genetics, but for all sciences that are practiced in ways that identify actual difference makers and not potential ones.

 

4. GWAS, EWAS, and Causation in Uncontrolled Systems

Eric Turkheimer

University of Virginia

A decade ago, as a half-century of population-based modeling of twin and adoption studies was giving way to the Human Genome Project and the era of measured DNA, I argued against the widespread optimism that molecular genetics would vindicate the twin studies of the previous era, replacing statistically substantial but causally vague variance components with well-specified etiological models leading from genes to neurons to behavior. At the time, my prediction had a distinctly Luddite ring to it. Why would anyone bet against the inexorable progress of science? My gloominess on the topic was in sharp contrast to the optimistic, not to say hegemonic, claims of most genetic researchers at the time.

Progress since then has been, it is safe to say, disappointing. It is not that associations between individual alleles and specific behaviors have been hard to find. On the contrary, we are awash in them: thousands of linkages and associations with behavior have been identified. But despite the myriad linkages and associations between alleles and complex human traits that have been reported, three persistent limitations have proved very difficult to overcome: (1) The reported associations are very small, in the sense that they each explain a tiny proportion of the overall variability, and collectively not much more than that; (2) The associations don't replicate very well; and (3), in part as a consequence of the first two, the various small associations between genes and behavioral outcomes haven't added up to etiological explanations of behaviors and especially behavioral disorders.

A recent series of papers in Nature Genetics described the results of several combined Genome Wide Association Scans (GWAS) of height. Although height is a highly heritable trait that can be measured with near-perfect reliability, and the studies employed the latest SNP technology in massive samples numbering in the tens of thousands, only a handful of variants related to height were identified. The SNPs that were shown to be related to height collectively accounted for under 5% of the variation. I explore the reasons for this surprising result, focusing on the analogy between GWAS and the long-standing search for environmental causes of behavior, which might be called EWAS, for Environment Wide Association Scan. The problems of nonexperimental causal inference faced by GWAS researchers were confronted long ago by mainstream social science, and were never fully overcome; they almost certainly cannot be fully overcome. Likewise, the statistical and methodological procedures that are employed in the genome project to control for multiple statistical tests and population stratification have been used in social science for decades. Understanding their successes and failures in that domain helps frame a reasonable set of expectations for the genomics of complex human characteristics.

 

SYMPOSIUM 4: THE ECONOMICS OF SCIENTIFIC PLURALISM

Rogier De Langhe

Research Foundation – Flanders (FWO), Brussels

Scientific pluralism is a normative endorsement of a plurality of views. Scientific communities can provide this endorsement in a number of ways, each involving a different division of labour within or across communities. The division of labour is a central theme in social epistemology, described by Helen Longino as “the question whether and when to pursue research that calls a community consensus into question or to pursue research that extends the models and theories upon which a community agrees.” As such, the tension running through this session is the extent to which a scientific community should be specialising within a perspective or diversifying across perspectives.

To study the division of cognitive labour in science, several models have already been put forth, such as Hull (1988), Kitcher (1990) and Goldman & Shaked (1991). These models are usually based on economics and tend to rely on the idea of an invisible hand. Building on this line of work, this session revolves around a model of scientific activity based on the formal work of Santa Fe Institute institutional economist Brian Arthur. As such, like its predecessors, it is based on economics. Unlike them, however, it explicitly rejects the invisible hand mechanism. Rogier De Langhe will introduce this model, situate it in the philosophical literature and point out its specific implications for the division of labour in science. Subsequently, economist Matthias Greiff will illustrate its dynamics against the background of episodes in the history of science and point out the model’s potential as a simulation platform for institutional design. Finally, Jeroen Van Bouwel will critically assess the merits and defects of such economic models of science in light of the questions of scientific pluralism and science policy.

 

1. Increasing returns to adoption in science

Rogier De Langhe

Research Foundation – Flanders (FWO), Brussels

Social epistemology is characterized by a recognition of the social dimension of knowledge acquisition. That the gathering of knowledge takes place in a social environment means that it can fall prey to a social dilemma. These are situations where optimal individual behaviour does not result in an optimal outcome for the collective. A possible solution to this dilemma can be to rely on the invisible hand: if individuals are given the freedom to pursue their own business, then the collective good will result.

Such a solution is found in one of the most influential works on the division of labour in science: ‘The division of cognitive labour’ by Philip Kitcher (1990). The basic problem of that paper is a social dilemma which Kitcher calls the “CO-IR-discrepancy”: the mismatch between a scientist’s individual rationality (IR) and the ideal balance between specialisation and diversity, the community optimum (CO). If scientists were all to pursue the same path, namely the one best supported by the available evidence, there would be no diversity and the community optimum would be unlikely to be reached, provided that, as Kitcher assumes, full specialisation is undesirable. Kitcher solves the discrepancy by introducing social and other factors, such as greed and stubbornness, which scatter scholarly attention and thus bring diversity into the scientific community. He then reformulates rationality: instead of pursuing the problem-solving method which intrinsically has the best prospects of success irrespective of what others in the community are doing, the rational scientist chooses to belong to a community in which the chances of being the first to discover the correct answer are maximized. The latter case takes into account the distribution of research effort already present in the research community: prospective individual returns decrease as the number of scientists following a certain path rises (i.e. there are decreasing returns to adoption). As a consequence, it becomes rational for the individual to pursue diversity, thus solving the CO-IR discrepancy.
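Kitcher’s decreasing-returns argument is summarized above only in words. The following toy sketch is not Kitcher’s (1990) model itself but a minimal illustration of the mechanism under assumed ingredients: a concave success-probability function, an arbitrary rate parameter, and an individual payoff taken as the programme’s success probability divided by the number of scientists sharing the credit.

import math

def success_prob(n, cap, rate=0.25):
    # Toy success probability of a research programme staffed by n scientists:
    # increasing in n but with diminishing marginal returns (assumed form).
    return cap * (1 - math.exp(-rate * n))

def allocate(N, caps):
    # Sequentially assign N scientists; each joins the programme offering the
    # higher expected individual payoff, taken here as the programme's success
    # probability divided by the number of scientists sharing the credit.
    counts = [0] * len(caps)
    for _ in range(N):
        payoffs = [success_prob(c + 1, cap) / (c + 1) for c, cap in zip(counts, caps)]
        counts[payoffs.index(max(payoffs))] += 1
    return counts

if __name__ == "__main__":
    # Programme 0 looks more promising (cap 0.9) than programme 1 (cap 0.6).
    print(allocate(N=20, caps=[0.9, 0.6]))

Run with these sample parameters, the allocation splits across both programmes even though one is clearly more promising, which is the sense in which decreasing returns to adoption make diversity individually rational.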

This paper starts from the same problem as Kitcher. However, it develops an alternative view of the dynamics of science to the one implicit in Kitcher. This is done by starting from an analogy between science as a distributed process and the dynamics of network industries. The dynamics of networks is such that the bigger the network, the greater the network benefits (or ‘network externality’, cf. Farrell & Saloner 1985). For example, a telephone becomes more valuable as more people have one. As a consequence, Kitcher’s assumption of decreasing returns to adoption is abandoned and replaced by increasing returns to adoption. The paper then shows how this change has a great effect on Kitcher’s views on the division of labour in science, on how institutional design should proceed, and on what the invisible hand can do for us.

The views argued for in this paper will be developed in more formal detail by institutional economist Matthias Greiff, who will present a model of the division of labour in science based on the dynamics of network industries.

 

2. A model of consensus and dissensus

Matthias Greiff

University of Bremen

This paper constructs a formal model describing the dynamics of science under increasing returns. The model restates the problem of the division of labour in science as an attempt to bring increasing returns into the dynamics of science. By assuming increasing returns this model differs significantly from Kitcher (1990) and is closer to a series of models originally developed by Brian Arthur (1994). While Arthur's models describe the economics of technology choice, it is demonstrated that a similar model can be used to replicate the dynamics of science.

An abstract computational agent-based model is built in which there is a population of heterogeneous scientists. Their main activity is to produce evidence. By producing evidence (e.g. writing a paper) each scientist employs the methods of a particular 'school of thought', 'paradigm', or cluster. The decisions at the micro-level produce a particular pattern at the macro-level: either several 'schools of thought', or clusters, exist side by side (diversity), or one cluster becomes dominant (specialisation) with several smaller clusters relegated to the fringes.

The individual scientist does not directly react to an objective world but to the available evidence produced by his fellow scientists. He relies on his colleagues' testimony. His decision - specialize or diversify - is based on his own preferences and the available evidence produced by his fellows. This introduces herd behaviour where, under certain conditions, uniformity of opinions emerges as a result of positive feedback effects. In Arthur's model the corresponding situation would be a lock-in: all producers adopting the same (potentially ineffective) technology. In the dynamics of science, however, we rarely find uniformity of opinions. There are always some sceptics opposing conventional wisdom. This is taken into account, and the model is tuned so that complete lock-in is only a special case. In the more general case of the model we see a dominant cluster alongside several small ones. The process by which one cluster becomes dominant is path-dependent and nonergodic. Random events are not averaged away as time passes, and small fluctuations matter for the selection of the dominant cluster. Although we cannot predict which cluster will become dominant, we know that some cluster will become dominant for sure; in that sense the process is predictable.

By modeling the scientist's choice as a nonlinear Polya process we take into account increasing returns. The strength of the increasing-returns effect depends on the available evidence as well as on the strength of clusters. Within stronger clusters, scientists are more likely to conform to the accepted methods and specialise.

In addition to the effect of a cluster's strength and evidence there are the scientist's preferences. By making a contribution to a cluster a scientist invests time and money. These sunk costs lead to a change in the scientist's preferences, making the agent more likely to contribute to the same cluster again. Or more pithily: higher sunk costs make it more likely that scientists specialise. By calibrating the model, both the formation of consensus and the dissolution of consensus can be explained. The parameter space of the model is explored and it is shown how institutional factors and policy influence the dynamics of the model. The particular aim in this talk is to investigate policies that can reduce the CO-IR discrepancy. Using some examples from the history of economic thought, the model is linked to particular episodes in the history of science.
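The abstract does not give the model’s functional form, so the following is only an illustrative sketch of a nonlinear Polya process, not the authors’ model: the exponent alpha, the uniform initial counts, and the share-based choice rule are assumptions made for the example.

import random

def simulate(n_clusters=3, n_steps=5000, alpha=1.5, seed=0):
    # Toy nonlinear Polya process: each new 'paper' joins cluster k with
    # probability proportional to (share of k) ** alpha. alpha > 1 gives
    # increasing returns to adoption; alpha = 1 is the classical Polya urn.
    rng = random.Random(seed)
    counts = [1] * n_clusters  # every cluster starts with one contribution
    for _ in range(n_steps):
        total = sum(counts)
        weights = [(c / total) ** alpha for c in counts]
        k = rng.choices(range(n_clusters), weights=weights)[0]
        counts[k] += 1
    return [c / sum(counts) for c in counts]

if __name__ == "__main__":
    # Compare regimes: decreasing (0.5), neutral (1.0) and increasing (1.5) returns.
    for a in (0.5, 1.0, 1.5):
        print(a, [round(s, 2) for s in simulate(alpha=a)])

With alpha greater than 1 (increasing returns) one cluster typically becomes dominant, and which one it is depends on early random draws (path dependence); with alpha below 1 the cluster shares tend to even out.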

 

3. Epistemic democracy on offer. The politics and economics of scientific pluralism.

Jeroen van Bouwel

Ghent University

Given that scientific practice can be considered as a social process, all of the social sciences can in principle provide us with conceptual tools to analyse science. In this paper, I will compare the use of some economic models to comprehend scientific activity – aspiring to unveil the logic of science – with accounts that rely on democratic models elaborated within political science. This will be done in two movements.

First, the understanding of scientific pluralism implicit in the economic DLG-Model – central to this session – is analyzed and criticized. The critique will be formulated by comparing the DLG-Model with a more political/democratic model, i.e., my version of scientific pluralism labelled agonistic pluralism (cf. Van Bouwel, forthcoming). It is shown how these different understandings of scientific pluralism help us to elucidate debates within economics practice (cf. Davis, 2008).

Second, a comparison between an influential work in the economic modelling of scientific practice, i.e. Philip Kitcher’s The Advancement of Science, on the one hand and Helen Longino’s The Fate of Knowledge on the other, articulates the shortcomings of economic models and demonstrates the aptitude of political/democratic models to comprehend scientific pluralism (and its consequences for science policy). This is illustrated by presenting Longino’s Critical Contextual Empiricism, a procedural social epistemology, as an instance of epistemic democracy (to be understood as a democratic conception of knowledge and science, not as the epistemic conception of democracy common in political philosophy).

Through these two movements, I hope to shelter social epistemology from economics imperialism and advance epistemic democracy, while, simultaneously, assuring pluralism in the analysis of scientific practice.

· Arthur, Brian. Increasing Returns and Path Dependence in the Economy. Ann Arbor: University of Michigan Press (1994).

· Davis, John, “The turn in recent economics and return of orthodoxy”, Cambridge Journal of Economics, 32, 2008, pp. 349-366.

· Farrell, Joseph & Saloner, Garth, “Standardization, Compatibility, and Innovation”, The RAND Journal of Economics, 16(1), 1985, pp. 70-83.

· Goldman, Alvin & Shaked, Moshe, “An Economic Model of Scientific Truth and Truth Acquisition”, Philosophical Studies, 63, 1991, pp. 31-55.

· Hull, David. Science as a Process. Chicago: University of Chicago Press (1988).

· Kitcher, Philip, “The Division of Cognitive Labor”, Journal of Philosophy, 87(1), 1990, pp. 5-22.

· Kitcher, Philip. The Advancement of Science. New York: Oxford University Press (1993).

· Longino, Helen. The Fate of Knowledge. Princeton: Princeton University Press (2002).

· Van Bouwel, Jeroen (ed.). The Social Sciences and Democracy. Basingstoke: Palgrave Macmillan (forthcoming).

 

SYMPOSIUM 5: PROBING THE PHILOSOPHICAL CONSEQUENCES OF EXPERIMENTAL PRACTICES IN DEVELOPMENTAL BIOLOGY

Alan Love

University of Minnesota

Most recent philosophical reflection on developmental biology has focused on theoretical issues, such as the preference for genetic explanations or the use of informational metaphors (Keller 2002; Love 2008; Robert 2004; Weber 2005). The experimental practices of developmental biology have been less visible in these analyses but harbor numerous possible consequences. For example: How does the diversity of genetic practices (e.g., mutant screens, knockdowns, and clonal analysis) have an impact on preferences for genetic explanations? How do different experimental practices that involve distinct technical skills and abilities (e.g., microsurgery and in situ hybridization) come together in the production of knowledge about ontogeny? Do these practices impinge upon the meaning of concepts (e.g., the classification of morphogenetic processes: condensation, epiboly, invagination, etc.)? Do they guide reasoning along particular lines (e.g., via the choice of a model organism) or do they encourage specific factors to be overlooked or underestimated (e.g., the recently identified roles of small silencing RNAs)?

New techniques with the potential to transform the study of ontogeny appear at a surprising rate (e.g., Keller et al. 2008). The significance of the experimental practices found in developmental biology has been recognized in several recent Nobel prizes, including the discoveries concerning early embryonic development using sophisticated mutant screens (1995 – Physiology/Medicine), the discovery of gene silencing by RNA interference (2006 – Physiology/Medicine), and the discovery and development of green fluorescent protein (2008 – Chemistry). This is not only a recent phenomenon of molecular biology but was also evident in earlier awards, such as the discovery of the organizer effect by Hans Spemann using tissue grafting (1935 – Physiology/Medicine). Many of the embryological practices that were adopted decades ago remain central to current experimentation (e.g., fate mapping or the established normal stages for embryos), and some of these practices have been modified or updated (e.g., from mechanical to laser ablations, or from wax models to 3-D embryo reconstruction). Understanding the consequences of these practices requires tracking their heterogeneity and dynamic interaction in the present, as well as their evolution through time.

The abundance of experimental practices found in developmental biology and their associated technical details make it a daunting task to probe their philosophical consequences. Ideally, analyzing the practices side-by-side with the scientific practitioners would be a fruitful endeavor. To this end our symposium will adopt a hybrid format of presentations and discussion with developmental biologists as primary participants. The initial paper of the session will outline possible links between experimental practices and various conceptual issues. In addition to the items mentioned above, several other epistemological questions will be raised, including: How do different experimental practices provide ‘justification’ for various knowledge claims in developmental biology? Is it preferable to use multiple practices to support these knowledge claims? Why or why not? How do practices generate new phenomena or transform those already under scrutiny? How are observations made (e.g., naked eye, fluorescent microscopy, or other means) and for what purposes (e.g., confirmation, exploration, explanation, etc.)? How are different practices related to the core questions in the study of ontogeny (e.g., axial determination, differentiation, or morphogenesis)? With these questions explicitly framed, six developmental biologists will make short presentations on specific experimental practices they have used in their own research. These presentations will then be followed by an open roundtable discussion where the philosophical issues and experimental practices can be compared and contrasted across disciplinary boundaries in order to expose novel insights and suggest new avenues of research for philosophy of science.

Alan Love

University of Minnesota

 

The Heterogeneity of Experimental Practices in Developmental Biology: Epistemological Implications

Laura Gammill

University of Minnesota

 

Combining Embryo Manipulation and Genomics to Understand Neural Crest Induction

Stephen Ekker

University of Minnesota

 

Studying Zebrafish Ontogeny with Insertional Mutagenesis

Ann Rougvie

University of Minnesota

 

From Temporal Puzzles to Powerful Tools: microRNAs and Nematode Ontogeny

David Zarkower

University of Minnesota

 

Answering Questions about Sex Determination with Conditional Gene Knockouts

Jonathan Slack

University of Minnesota

 

Model Organisms and Tissue Regeneration

David Greenstein

University of Minnesota

 

Philosophy of Fertilization in C. elegans

·

Keller, E.F. (2002) Making Sense of Life: Explaining Biological Development with Models, Metaphors, and Machines. Cambridge, MA: Harvard University Press.

·

Keller, P.J., A.D. Schmidt, J. Wittbrodt, E.H.K. Stelzer (2008) “Reconstruction of Zebrafish Early Embryonic Development by Scanned Light Sheet Microscopy”, Science 322: 1065-1069.

·

Love, A.C. (2008) “Explaining the Ontogeny of Form: Philosophical Issues”, in A. Plutynski and S. Sarkar (eds). The Blackwell Companion to Philosophy of Biology. Malden, MA: Blackwell Publishers, pp. 223-247.

·

Robert, J.S. (2004) Embryology, Epigenesis, and Evolution: Taking Development Seriously. New York: Cambridge University Press.

·

Weber, M. (2005) Philosophy of Experimental Biology. New York: Cambridge University Press.

 

SYMPOSIUM 6: CAUSATION AND EVIDENCE IN THE HISTORICAL SCIENCES

Derek Turner

Connecticut College

 

1. Common Cause Explanation and the Asymmetry of Overdetermination

Carol Cleland

University of Colorado-Boulder

In earlier work I argue that the methods of prototypical historical science differ from those of classical experimental science, and that these differences in practice are underwritten by a pervasive physical property of the universe, a time asymmetry of causation (David Lewis’s “asymmetry of overdetermination”). More specifically, local events are causally ordered in time in such a way that later events usually overdetermine earlier events and earlier events usually underdetermine later events. The overdetermination of the localized past by the localized present explains, for instance, why geologists can confidently infer the occurrence of long past events such as a massive, caldera-forming eruption that occurred 2.1 mya in what is now Yellowstone National Park. The underdetermination of the localized future by the localized present explains why it is so much more difficult to infer the occurrence of near future events such as the next eruption of Mt. Vesuvius. In this talk I discuss the extensive use of common cause explanation in prototypical historical natural science. I argue that the asymmetry of overdetermination provides the needed justification for the principle of the common cause. According to the first half of the asymmetry of overdetermination, the present is filled with epistemically overdetermining traces of past events. Hence it is likely (but not certain) that a puzzling association (correlation and/or similarity) among present-day phenomena is due to a last common cause. The quest for a “smoking gun,” which lies at the heart of prototypical historical science, is a search for additional evidential traces for distinguishing which of several rival common cause hypotheses provides the best explanation for the available body of traces. The overdetermination of the past by the localized present, a physical fact about our universe, ensures that such traces are likely to exist if the traces in the original collection share a last common cause. For insofar as past events typically leave numerous and diverse effects, only a small fraction of which is required to identify them, the contemporary environment is likely to contain many, as yet undiscovered, potential smoking guns for discriminating among rival common cause hypotheses. Indeed, a search for a common cause may yield evidence that an association among traces that initially appears to be the result of a common cause is actually the result of separate common causes; even in separate cause explanations the focus is usually on common causes. I also briefly discuss the threat of “information destroying processes,” arguing that there is reason to believe that it is not as insurmountable as sometimes maintained.

 

2. Causal Explanations of Megafaunal Extinction

Kevin Francis

Evergreen State College

In Wonderful Life, Stephen Jay Gould argued that historical sciences and experimental-predictive sciences have distinctive aims and methods. In the historical sciences, scientists attempt to explain the cause of singular events not “by reducing them to simple consequences of natural law” but rather by enmeshing them in narratives that capture “a realm of contingent detail.” Moreover, “verification by repetition does not arise because we are trying to account for uniqueness of detail that cannot, both by laws of probability and time’s arrow of irreversibility, occur together again.” This paper explores Gould’s account by examining scientific efforts to understand an especially stubborn problem in the historical sciences: what caused the extinction of ice age mammals?

Between 50,000 and 10,000 years ago, major megafaunal extinctions took place on every continent except Africa. More than 40 genera of vertebrates, including mammoths, mastodons, and giant ground sloths, disappeared from North America around 13,000 years ago. By the 1960s, with the advent of radiocarbon dating, most scientists agreed on the magnitude and timing of megafaunal extinction. In contrast, for the past 50 years scientists have disagreed about the cause of these extinctions. They have proposed diverse causal explanations for this mass extinction, including a variety of climatic changes, human hunting or anthropogenic disturbance, infectious disease, and exploding asteroids. As the number of publications on this mass extinction continues to grow, the overall trend has been toward less, rather than greater, consensus on its cause or causes.

The longevity and intractability of this problem produced many explicit scientific discussions about causation, explanation, and evidence among practitioners of archaeology, paleontology, and paleoecology. This paper explicates the way that scientists themselves viewed their aims and methods, and in doing so challenges some of the claims made by Gould (and others) about the unique methodology of historical sciences. Paul Martin and the so-called “overkill hypothesis” can illustrate two of these challenges. First, Martin claimed that, like his colleagues in the experimental sciences, he presented a falsifiable extinction model. Although some of his critics eventually challenged this claim of falsifiability, they did not challenge the principle of falsifiability in historical models. Second, Martin attempted to explain the cause of individual extinction events (e.g. mammoth extinction in North America), not by establishing a narrative of contingent detail, but rather by establishing these individual events as part of a broad class of events (all continental and island megafaunal extinctions) with a putative cause-effect regularity in every case (human hunting-extinctions). In this case, the debate focused on the particular assignments of individual events to broad classes, but not on the basic strategy of explaining causation through membership in a class that exhibited cause-effect regularities. In these cases, scientific practitioners behaved in a way that is not adequately captured by certain philosophical accounts of the historical sciences.

 

3. Historical Trends and Philosophical Theories of Causation

Derek Turner

Connecticut College

Scientists working in fields from paleobiology and evolutionary biology to climate science and economics often take themselves to be studying the causes and effects of historical trends. A trend is a persistent directional change in some variable. Examples of historical trends include global warming, evolutionary size increase (Cope’s rule), directional changes in gene frequencies in evolving populations, grade inflation, and falling housing prices.

Over the last few decades, philosophers of science have elaborated and defended a variety of different theories of causation—regularity theories, counterfactual theories, property transfer theories, and others. Most philosophers have thought of causation as a relation that holds between (types of) objects or events, or perhaps facts or states of affairs. But historical trends do not fit neatly into any of these metaphysical categories. If those theories cannot accommodate the idea that historical trends have causes and effects, then they do a poor job of making intelligible an important feature of current practice in historical science. This is especially problematic since some of the scientific work in question has relevance to public policy.

This paper focuses on one of the leading current theories of causation: the interventionist theory that Jim Woodward defends in his book, Making Things Happen (Oxford University Press, 2003). Because Woodward treats causation as a relationship that holds between two variables, his theory seems amenable to the thought that a trend (a persistent directional change in one variable) can have causes and effects. In his view, roughly, the claim that x is a cause of y means that if we could manipulate the value of x while holding fixed all the other variables that might influence y, then the value of y would change too. But Woodward’s account faces two problems when it is applied to historical trends: (1) Woodward says nothing about persistence or directionality. Can his account really capture what is going on when scientists make claims about the causes of persistent, directional changes in some variable? (2) Woodward writes that “values of variables are always possessed by or instantiated in particular individuals or units, as when a particular table has a mass of 10kg” (2003, p. 39). However, many of the measures that concern scientists who work on trends are averages (e.g., mean body mass of mammals, or the mean geographical center of population of the U.S.) or proportions (e.g., the relative frequency of an allele in a population), or properties of populations (e.g. population size, or the number of species in a clade). Can Woodward’s account capture what is going on when scientists make claims about the causes of changes in these sorts of aggregate measures?
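For readers who want the interventionist idea in concrete form, the following toy sketch (not Woodward’s or Turner’s own formalism; all variable names and numbers are hypothetical) simulates a time-indexed structural model in which X follows a directional trend and Y depends on X plus an undisturbed second cause; “intervening” amounts to fixing the slope of X’s trajectory while the rest of the model runs as usual.

```python
import random

# A toy, time-indexed structural model: X_t follows a directional trend whose
# slope we can "intervene" on, and Y_t depends on X_t plus an independent cause Z_t.
# This is only an illustrative sketch of the interventionist idea discussed above
# (manipulate X while leaving Y's other influences to their usual mechanisms);
# it is not Woodward's or Turner's formalism, and all names are hypothetical.

def run(slope, n_steps=100, seed=0):
    rng = random.Random(seed)
    ys = []
    for t in range(n_steps):
        x_t = slope * t + rng.gauss(0.0, 1.0)        # trending variable: persistent directional change
        z_t = rng.gauss(0.0, 1.0)                    # other influence on Y, left undisturbed
        y_t = 0.5 * x_t + z_t + rng.gauss(0.0, 0.5)  # structural equation for Y
        ys.append(y_t)
    return sum(ys) / n_steps                         # an aggregate measure of Y over the period

# "Intervening" on the trend means fixing its slope while everything else runs as usual.
print(run(slope=0.0))   # no trend in X
print(run(slope=0.2))   # upward trend in X: the aggregate of Y shifts accordingly
```

On this toy reading, problem (1) resurfaces as the question of whether the thing intervened on is the value of X at a time or the whole trending trajectory.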

My project is to assess the prospects of extending Woodward’s account so that it addresses these two issues, and thus makes intelligible one important feature of current practice in the historical sciences.

 

SYMPOSIUM 7: THE EPISTEMIC ROLES OF ORGANISMS IN BIOLOGICAL PRACTICE

Sabina Leonelli

University of Exeter

Today as in the past, biological research has been centred on the study of living organisms. Yet, the ways in which organisms enter and shape biological research can vary dramatically depending on the practices in question. Organisms can be objects for collection, classification and display, as in the natural history tradition. They can be brought into a lab and experimented upon, a practice that often results in their standardisation into ‘tractable organisms’ and in the production of data to be used as evidence for claims about their biology. And they can disappear as material objects from ‘dry’ biological research, coming back however in the form of assumptions about how available data should be organised and interpreted to produce new knowledge.

Little philosophical effort has hitherto gone into understanding how different – or similar – the role of organisms can be within diverse research practices. By gathering and confronting three different cases ranging from 18th century taxonomy to 20th century bioinformatics, this session aims to fill this gap, in the hope of providing a platform for a more systematic understanding of the epistemic role of organisms in the life sciences.

 

1. Exemplars, Records, Tools: Organisms in Botanical Research, c. 1750-1850

Staffan Mueller-Wille

University of Exeter

In botany, garden and herbarium specimens have been used for purposes of systematic research since the mid-sixteenth century. The associated practices of collecting, exchanging and collating specimens were most influentially synthesized by Carl Linnaeus in the mid-eighteenth century, although it would take roughly a century after Linnaeus’s death until they were formally canonized in international rules of nomenclature. The role of specimens and type-specimens in the history of natural history – a “metaphysics in action”, as Lorraine Daston calls it – has been discussed in a number of historical and philosophical studies in recent years. What has largely been overlooked, however, is the fact that, alongside the rise of the type specimen method, plants began to acquire another role in botanical research. In hybridization experiments, plants were increasingly used as tools to manipulate other plants, and the offspring resulting from these interventions as a kind of recording device to score the effects of hybridization. In my presentation I will look at select hybridisation experiments of the period to unravel the intricate relationship between natural historical and physiological concerns which governed this experimental practice.

 

2. The Mexican axolotl’s long history as an experimental animal

Christian Reiss

Max-Planck-Institute for the History of Science, Berlin

In the historiography of laboratory animals, the animals themselves seem to become historicized only in the context of their integration into the experimental systems of the life sciences. The reasons seem to be manifold. First, most studies are very detailed micro-histories with a bias towards 20th century biomedical sciences. Second, the longer history of an animal often seems to be rather difficult to tell, as such histories are either very vast (e.g. the rat or the frog) or very small (e.g. Drosophila). Third, they are perhaps considered to be irrelevant, as the boundary between history and natural history seems to become blurred. What at first seems to be merely a historiographical problem has, as I will argue, philosophical implications as well.

In my talk, I want to present the example of the Mexican axolotl as a longue durée study of the history of an experimental animal with a special focus on the various epistemic practices the animal was part of. The axolotl was first described in the context of the exploration of the Americas in the 16th and 17th centuries. Research was mainly conducted in the field, i.e. in Mexico, and was focused on the description and classification of the animal as part of the New World fauna, extracting the axolotl from its indigenous heritage and environment and translating it into European knowledge. After Alexander von Humboldt sent preserved specimens to Georges Cuvier at the Natural History Museum in Paris, research went indoors and the animal was finally mobilised for Western science. In the 1860s live specimens were brought to Paris in the course of France’s colonial activities in Mexico and opened up the possibility of studying a living exotic animal in the closed environment of the museum/laboratory. The animal also became a popular pet in the burgeoning European aquarium craze, which was especially crucial for the establishment of an axolotl infrastructure, providing both animals and breeding know-how. While at first research in those closed environments concentrated on questions of evolution, the axolotl developed into an important experimental animal for embryological studies in the 1920s and 1930s.

This history had two consequences for the animal and the research conducted on it. First, the animal was integrated into different research traditions (taxonomy, evolutionary biology, developmental biology) that existed in parallel, with more or less exchange between them depending on the period. Second, this history, with its different classifications and interpretations of the axolotl, was inscribed into the animal and shaped future research on it.

Focusing on four selected examples from the axolotl’s history, I want to show the different mobilisation and translation practices and how they shaped the animal and the research interdependently.

 

3. Re-Using Data, Re-Thinking Organisms: The Epistemic Impact of Databases on Model Organism Biology

Sabina Leonelli

University of Exeter

Rachel A. Ankeny

University of Adelaide

This paper concerns how evidence becomes structured for use and re-use within model organism biology, and the impact of this structure on research and researchers. Our analysis focuses on how data gathered on model organisms are collected, ordered and retrieved in databases in order to be re-used across research communities in the biological and biomedical sciences. The use of databases certainly challenges the social structure of these research communities, by enhancing opportunities to share data and materials gathered on different organisms across disciplinary and geographical boundaries. What we wish to focus on here are the epistemic consequences of this shift in practices: how reliance on cyberinfrastructure affects the interpretation of data as evidence towards new discoveries within biology.

We argue that the processes of data annotation and data mining affect biology in at least four ways:

(1) by imposing a cluster of theoretical commitments, including assumptions about the significance of conservation, taxonomy, phylogeny and homology among species, which are then imported into the knowledge drawn from the use of data as evidence;

(2) by creating implicit heuristics in the processes of deciding what information to include (and what to exclude) in databases to facilitate data re-use, as the choice of so-called ‘meta-data’ is dictated by assumptions about what kind of information is needed to evaluate the quality and reliability of data;

(3) by selectively linking the virtual world of data and the concrete materials (e.g., actual specimens and tissue cultures) from which data have been obtained, thus creating a group of ‘favourite samples’ that are assumed to be representative for all other biological materials, whether in the laboratory or in the wild;

(4) by determining the characteristics and interests of the users that the databases are supposed to serve, thus favouring an alternative community of users whose expectations and needs differ from the original model organism communities; as a result, data are re-used for different purposes than the ones previously characterising research on model organisms.

We conclude that the increasing reliance on databases as vehicles to circulate data is having a major epistemic impact on the ways in which data are used as evidence, and consequently on the ways in which researchers understand the biology of organisms.

 

PAPER ABSTRACTS

 

SOCIALIZING EPISTEMICS: RESOLVING THE OX-PHOS DEBATE

Douglas Allchin

University of Minnesota

Racker, Chance, and Slater; Ernster, Boyer, candlestick-maker? So many biochemists at odds about energy in the cell in the 1960s. They ultimately agreed to accept chemiosmotic theory – but all for different reasons. I will sketch the epistemic structure of their distributed, local responses, what this indicates about the epistemological foundation of scientific practice, and what strategies for science they suggest.

 

FROM SIMILARITY TO HOMOMORPHISM: TOWARD A PRAGMATIC ACCOUNT OF REPRESENTATION IN ART AND SCIENCE, 1880-1914.

Dr. Chiara Ambrosio

University College London

The years 1880-1914 were a time of intense experimentation in the visual arts. Representative conventions became variable, and artists deliberately departed from a concept of depiction considered as physical resemblance or photographic similarity. Visual representations progressed toward a conceptualization of figures and objects that transcended perceptual data, and the rendering of pictorial objects turned into an experiment involving complex visualization processes.

This paper explores the interplay between artistic and scientific representative practices between 1880 and 1914. I argue that science and technology acted as substantial challenges upon the concept of resemblance in art and that the rhythm of scientific and technological discoveries at the turn of the 20th century paralleled a shift from a notion of similarity to one of homomorphism in the conceptualization of pictorial representation. Homomorphism denotes representations which dispense with a point-to-point correspondence between depicted objects and perceptual data. I developed the concept from a scholarly study of Charles S. Peirce’s pragmatic account of representation, and in particular his theory of iconicity. Peirce defined iconic signs as “partaking in the character of the object” (CP 4.531), that is, as preserving the same relational structure as their object. A theory of iconicity as homomorphism successfully exemplifies representative relations based on structure preservation. Applied to 20th century representative practice, homomorphism offers a plausible explanation for representations in which a considerable conceptual effort is required, independently of a point-to-point correspondence between depicted objects and perceptual data.
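As a rough formal gloss (the standard algebraic usage, offered here only for orientation and not as the author’s own definition), a homomorphism is a structure-preserving map:

```latex
\[
h : A \to B, \qquad
h\bigl(f_A(a_1,\dots,a_n)\bigr) \;=\; f_B\bigl(h(a_1),\dots,h(a_n)\bigr)
\quad \text{for each operation } f \text{ of the structure.}
\]
```

Because h need not be one-to-one, relational structure can be preserved without the point-to-point correspondence that similarity-based accounts of depiction presuppose.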

Using four case studies - the photographer Alfred Stieglitz and the painters André Derain, Max Weber and Pablo Picasso - I argue that representative practice between 1880 and 1914 was strongly informed by experimental scientific practices and that the shift from figurative to conceptual representation in art was triggered by a more significant theoretical shift involving representation as a general philosophical notion. Rather than a normative quest for the necessary and sufficient conditions for representation, I will propose a pragmatic evaluation of the means and strategies through which artists and scientists devise perspicuous and useful representations of the world. My analysis of the correlations between artistic and scientific representations at the turn of the 20th century aims to fulfill a twofold purpose. From a historical viewpoint, it draws significant parallels between the experimental aspects of representative practices in art and science considered as ways of exploring natural phenomena and intervening upon them. From a philosophical viewpoint, my goal is to propose a novel epistemological framework to assess how the shift in the conceptualization of representations affected subsequent styles of knowing and experimental practices in art and science. Ultimately, by combining the relative merits of historical and philosophical accounts of representation, I will argue for the advantages and desirability of a philosophically informed history of representative practice.

 

MODELING COLLECTIVE BELIEF IN SCIENCE

Hanne Andersen

University of Aarhus, Denmark

In her seminal paper “Modeling collective belief”, Gilbert (1987) argues for a joint acceptance model of group belief. On this model, in order to form a group belief, the individual members of the group must openly express a conditional commitment jointly to accept some given claim. Gilbert bases her arguments on close analyses of a number of group belief situations drawn primarily from everyday settings, but also claims that her model can be used for analyses of science (Gilbert 2000). Drawing on Gilbert’s model, Wray (2007) and Rolin (2008) have argued that scientific research teams do have collective knowledge and that what the team jointly accepts can be determined from examining their published papers. Further, contrary to Wray, Rolin also claims that the scientific community of some specialty may jointly accept scientific claims and that joint acceptance can be expressed, for example, by explicit public agreement with conference talks, etc.

However, in this paper I shall argue that more detail and attention to case studies are needed to develop a plausible model of collective belief in science.

I shall argue that when modeling collective belief in science it is crucial to distinguish between different kinds of groups, both with respect to the character of the interaction within the group, ranging from a jointly publishing research group to the scientific community of an entire discipline, and with respect to the cognitive distance between group members, from narrow monodisciplinary to highly interdisciplinary groups. However, in both cases the various types of groups should be seen as lying on a continuum and not as clearly demarcated categories.

Based on case studies I shall analyze the various ways in which such groups can be said to form joint beliefs, and I shall discuss a) how the character of the interaction within scientific groups leads to different processes of joint acceptance of scientific claims, and b) how the cognitive distance between the group members influences the process through which joint acceptance can be achieved.

 

WHOSE KNOWLEDGE MATTERS: THE CONTEXT DISTINCTION: CONTROVERSIES OVER FEMINIST PHILOSOPHY OF SCIENCE

Monica Aufrecht

University of Washington

The “context of discovery” and “context of justification” distinction has been used by Noretta Koertge, Elizabeth Anderson, Richmond Campbell, and Lynn Hankinson Nelson in debates over the legitimacy of feminist approaches to philosophy of science. Koertge uses the context distinction to argue against the possibility of gender, race, class and other social factors being epistemically relevant to knowledge formation. She contends that social factors belong in only the “context of discovery,” where research questions are chosen and pursued. She argues that such factors should be excluded from the “context of justification,” in which evidence for scientific claims is evaluated, to ensure against bias and political distortion. Since the basic assumptions of feminist epistemology violate this context distinction, Koertge argues that the approach of feminist epistemology is misguided. Elizabeth Anderson and Lynn Hankinson Nelson, among others, defend feminist epistemology against these charges.

In this paper, I evaluate their defenses and show that in these debates the use of the context distinction is deeply ambiguous and so masks underlying disagreements about when and why philosophers should look to scientific practice and about the aims of philosophy of science more generally. Traditionally, distinctions have been used to dissolve puzzles by showing how the puzzles reduce to shared assumptions, or they have been used to open up a debate to allow for further possibilities. However, in this case, Koertge uses the context distinction to close down the conversation by barring certain approaches, thereby obscuring points of true disagreement about the nature of justification. Nonetheless, Koertge raises important questions that have been too quickly set aside by Anderson and Nelson. I argue that the use of the context distinction masks underlying debates about naturalism and the nature of justification.

These issues about what constitutes justification are not essentially feminist, nor do they necessarily turn on views of values and ideology, or Koertge’s worries about biased inquiry. Rather, they depend on determining what method we should use to develop an account of justification: Establish a priori meta-principles, or look in part to scientific practice? The distinction also masks underlying disagreement about the nature of justification: Will we find one universal account of justification (such as falsificationism), or will different episodes in science require unique accounts of how evidence supports a scientific claim? Examining these debates can be fruitful for feminist epistemologists; a disentangling of these ambiguities highlights important concerns that need to be met as those in science studies strive to map how social factors get legitimately incorporated into the evaluation of knowledge claims.

 

COMPUTER EXPERIMENTS IN HARMONIC ANALYSIS

Michael Barany

Cambridge University

It is conventionally understood that computers play a rather limited role in theoretical mathematics. While computation is indispensable in applied mathematics and the theory of computing and algorithms is rich and thriving, one does not, even today, expect to find computers in theoretical mathematics settings beyond the theory of computing. Where computers are used, by those studying combinatorics, algebra, number theory, or dynamical systems, the computer most often assumes the role of an automated and speedy theoretician, performing manipulations and checking cases in a way assumed to be possible for human theoreticians, if only they had the time, the memory, and the precision. Automated proofs have become standard tools in mathematical logic, and it is often expected that proofs be published in a computer-checkable format.

It is not surprising, then, that most philosophical work on computers in theoretical mathematics has been on computers' roles as supplementary mathematicians. Donald MacKenzie's 2001 book Mechanizing Proof demonstrates the rich potential for social and historical studies to complement the substantial analytic debate in this area of philosophy. But what of computers in theoretical mathematics behaving as computers, and not as mere mechanized mathematicians? Very little role is commonly assumed for computers working as supplements to mathematicians, rather than as supplementary mathematicians themselves. Accordingly, very little philosophy has attempted to grapple with theoretical mathematics in which computers play an essential but essentially non-theoretical role.

My presentation will draw on work I conducted as a researcher in harmonic analysis on fractals at Cornell University. I will analyze the explicit and implicit conceptual apparatus employed in my and my fellow researchers' use of computers in the theoretical study of second order differential equations, such as those for sound and heat flow, on various fractal analogues of the Sierpinski gasket. Such gaskets are easy to visualize in very crude approximation in a low number of dimensions. As one increases the complexity of the gasket or the refinement of one's analysis, visualization and precise computation become impossible, and soon computers are unable to produce even approximate data to model differential equations in these situations. We thus had to carefully choose analytic approaches and methods to make our theoretical mathematics amenable to computer simulation.
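As a purely illustrative sketch of the kind of computation involved (not the author’s actual code, and using the plain graph Laplacian rather than the properly renormalized fractal Laplacian), one can build a finite graph approximation of the Sierpinski gasket and examine its lowest eigenvalues, which approximate the modes of heat- or wave-type equations on the fractal:

```python
import numpy as np

# Build the level-n graph approximation of the Sierpinski gasket, assemble its
# graph Laplacian L = D - A, and inspect the smallest eigenvalues. This is only
# a toy illustration of "computer approximation" in this setting; a faithful
# fractal Laplacian would carry additional renormalization factors.

def gasket_cells(level):
    a, b, c = (0.0, 0.0), (1.0, 0.0), (0.5, np.sqrt(3) / 2)
    cells = [(a, b, c)]
    for _ in range(level):
        nxt = []
        for p, q, r in cells:
            pq = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
            qr = ((q[0] + r[0]) / 2, (q[1] + r[1]) / 2)
            rp = ((r[0] + p[0]) / 2, (r[1] + p[1]) / 2)
            nxt += [(p, pq, rp), (pq, q, qr), (rp, qr, r)]
        cells = nxt
    return cells

def gasket_laplacian(level):
    index, edges = {}, set()
    def idx(pt):
        key = (round(pt[0], 9), round(pt[1], 9))   # deduplicate shared vertices
        return index.setdefault(key, len(index))
    for tri in gasket_cells(level):
        i, j, k = (idx(p) for p in tri)
        edges |= {frozenset(e) for e in ((i, j), (j, k), (k, i))}
    n = len(index)
    L = np.zeros((n, n))
    for e in edges:
        i, j = tuple(e)
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

L = gasket_laplacian(level=4)                    # 123 vertices at level 4
print(np.linalg.eigvalsh(L)[:6])                 # smallest graph-Laplacian eigenvalues
```

Even this toy version makes the abstract’s point tangible: the matrices grow geometrically with the refinement level, so higher-dimensional or finer analogues quickly outrun what can be visualized or computed directly.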

In my case, studying the transformation of the gaskets as they are expanded into increasingly high dimensions, computer simulation eventually required that the problem be reimagined entirely in terms of interlinked systems of parameters. This computer-approximation-driven theoretical orientation shaped my mathematical intuitions toward the problem and guided my fellow researchers and me in both theoretical and computational directions. We discovered both that computer approximation could be incredibly powerful as an aid to intuition, and that it can be incredibly difficult to transfer computer-oriented mathematics back into the purely theoretical standards of our area of specialty.

I will address the philosophical implications of computer-driven theoretical mathematics, asking how computer experiments can shape both the content and standards of theoretical sciences.

 

TRANSIENT UNDERDETERMINATION AND VALUES IN SCIENCE

Justin Biddle

Bielefeld University

The most common argument against the ideal of value neutrality in science – or the ideal that contextual values, such as moral and political values, be excluded from the epistemic evaluation of scientific research – is an argument from the underdetermination of theories by logic and evidence. According to this argument, there is a gap between logic and evidence, on the one hand, and theories, on the other, and this gap is inevitably filled by contextual factors. The debate over the ideal of value neutrality typically focuses upon the plausibility of a strong version of the underdetermination thesis, such as the thesis that all theories are underdetermined by all possible evidence. The possibility that a weaker version of the thesis – e.g., the thesis that many theories are transiently underdetermined by the available evidence – undermines the ideal of value neutrality is generally ignored. In this paper, I argue that it is a mistake to ignore this possibility; transient underdetermination provides strong grounds for rejecting the ideal of value neutrality.

To do this, I develop a preliminary argument for the claim that transient underdetermination undermines the ideal of value neutrality; I consider what I take to be the most plausible objection to this argument, and I then argue that this objection fails. The preliminary argument from transient underdetermination against the ideal of value neutrality proceeds as follows. There are many situations in which current, cutting-edge research is transiently underdetermined and in which decisions regarding hypothesis-choice must be made. In such situations, we do not have the luxury to wait until all of the evidence is in and instead must make decisions in the face of underdetermination. In these situations, evidential considerations do not determine hypothesis-choice, and yet a choice must be made; therefore contextual factors will play an inevitable role. Given this, the ideal of value neutrality is seen to be unattainable and thus should be rejected.

The standard response to this argument is one originally formulated by Richard Jeffrey. Jeffrey argues that the proper task of the scientist is not to accept or reject hypotheses, but rather to assign probabilities to hypotheses, which can be done in a value-neutral fashion. Once this is done, the hypotheses and accompanying probabilities are then handed over to policy makers, who then decide how best to act. According to this line of reasoning, while contextual factors such as ethical values can and should be brought to bear on the choice of which hypotheses will guide our actions (e.g., our public policies), such factors should be excluded from the epistemic appraisal of research. I argue that the Jeffrey objection fails, because it is not feasible to assign probabilities to transiently underdetermined hypotheses in a value-neutral fashion.

 

COGNITIVE SUBTRACTION TECHNIQUES AND NEUROIMAGING RESEARCH

Robyn Bluhm

Old Dominion University

Prior to the development of functional neuroimaging techniques such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI), the field of cognitive psychology developed largely independently of that of neuroscience. The ability to image the brain activity associated with the performance of a cognitive task has led to the development of a new field of cognitive neuroscience. Neuroimaging studies, however, depend crucially on theories in cognitive psychology, so that the brain activity associated with a cognitive task of interest can be distinguished from other neural activity. Yet there is still no clear account of the relationship between theories of neural functioning (as tested in neuroimaging studies) and the theories of cognitive function developed by cognitive psychologists and there is still much debate among researchers about the best methods for integrating cognitive psychology and neuroimaging. Cognitive psychology has traditionally relied on a “subtractive” method in which the details of a cognitive task are varied so that performance on the experimental task (which includes the specific cognitive process of interest) is compared with that on a control task (identical except for that specific process). The time taken to complete the control task is then subtracted from that on the experimental task. Sophisticated cognitive tasks can be constructed using these methods that allow psychologists to develop detailed models of cognitive processes. In the context of functional neuroimaging, the application of these methods requires the assumption that there is some kind of systematic relationship between psychological and neural processes, but it is extremely unlikely that there is a clear isomorphism between the two types of process.
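The subtraction logic described above can be shown schematically. The sketch below (hypothetical array names; not an actual PET/fMRI pipeline, which would add hemodynamic modelling and statistical thresholding) averages simulated scans from an experimental and a control condition and subtracts them voxel by voxel:

```python
import numpy as np

# Schematic cognitive-subtraction illustration: average the signal acquired during
# the experimental task and during the control task, then subtract voxel-by-voxel.
# Data and names are invented for illustration only.

rng = np.random.default_rng(0)
n_voxels, n_task_scans, n_control_scans = 1000, 20, 20

task_scans = rng.normal(100.0, 5.0, (n_task_scans, n_voxels))
control_scans = rng.normal(100.0, 5.0, (n_control_scans, n_voxels))
task_scans[:, :50] += 3.0    # pretend the first 50 voxels respond to the process of interest

difference = task_scans.mean(axis=0) - control_scans.mean(axis=0)   # the "subtraction image"
print(difference[:5])        # voxels whose extra activity is attributed to the added cognitive process
```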

In this paper, I examine the scope and limits of subtractive methods in cognitive neuroimaging. When PET imaging was initially developed, both the theories and the methods of cognitive psychology were simply imported into brain imaging protocols, so that neural activity associated with experimental and control tasks was subtracted in the same way as reaction times were subtracted in cognitive paradigms. Because of the lack of isomorphism between psychological and neural processes, this approach to brain imaging provides only a limited understanding of how the brain implements cognitive processing. More recently, fMRI researchers have capitalized on some important technical differences between fMRI and PET to develop new methods for examining brain activity over time. Briefly, PET allows the acquisition of a single image during a functional scan, whereas fMRI permits the acquisition of multiple images during the same amount of time. Because of this, fMRI scans can be used to provide information about not just the activity of different areas of the brain, but also the connectivity between them, and how that connectivity changes during the performance of a cognitive task. I argue that these new methods of analysis cannot be understood without recourse to the traditional subtractive method, but that they also build on this method in ways that allow the development of more sophisticated models of the relationship between psychological and neural processes.

 

HOW DO MODELS GIVE US KNOWLEDGE? MODELS AS EPISTEMIC TOOLS

Mieke Boon

University of Twente, The Netherlands

Tarja Knuuttila

University of Helsinki, Finland

How do models give us knowledge? Although there have been differing perspectives on models, philosophers of science have generally agreed that models give us knowledge because they represent their supposed external target objects more or less accurately, in relevant respects and to sufficient degrees (Bailer-Jones 2003; da Costa and French 2000; French and Ladyman 1999; Frigg 2002; Morrison and Morgan 1999; Suárez 1999; Giere 2004). The fundamental dividing line runs between those accounts that take representation to be a two-place relation between two things, the model and its target system, and those that argue that the representation-users and their purposes should also be taken into account.

The conviction that representation can be accounted for by reverting solely to the properties of the model and its target system is part and parcel of the semantic approach to scientific modelling. According to this conception, models specify structures that are posited as possible representations of either the observable phenomena or, even more ambitiously, the underlying structures of the real target systems. The representational relationship between models and their target systems is usually analysed in terms of isomorphism (van Fraassen 1980, 45, 64; Suppe 1974, 97, 92; French 2003; French and Ladyman 1999).

Pragmatic approaches point out, in turn, that no thing is a representation of something else in and of itself; it always has to be used by scientists to represent some other thing (Teller 2001, Giere 2004). However, if we accept the pragmatist minimalist approach to representation, not much is established in claiming that models give us knowledge because they represent their target objects. In fact, we will argue that the pragmatist account just points to the impossibility of giving a general substantial analysis of representation that would explain in virtue of what knowledge, or information, concerning real target systems could be retrieved from the model.

As our concern is with explaining how and why models give us useful knowledge, we will approach models from a functional point of view, as epistemic tools. This amounts to considering modelling as a specific scientific practice which makes use of concrete representational means for specific purposes such as scientific reasoning, theory construction and the design of other artefacts and instruments. The conception of models as epistemic tools is contrasted with the traditional view of models, which assumes that models are representations of some target systems. From this perspective a scientific model is a constructed entity, which gives a theoretical interpretation of a target system in view of particular epistemic purposes. The turn to modelling thus actually implies an extended notion of a model: models can be regarded as unfolding entities constructed by scientists with various representational means, into which the epistemic purposes and various other ingredients are built. With the example of the Carnot model of a heat engine we aim to show that a model reduces neither to a diagram nor to a theory or an imaginary entity, but consists of diverse aspects that scientists have built into it in the process of modelling. We claim that this intricate content of scientific models, which usually is fully understood only by the scientists working in the field in question, enables models to function as epistemic tools.
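For readers unfamiliar with the example, the theoretical kernel usually associated with the Carnot model is the efficiency bound for a reversible engine operating between reservoirs at absolute temperatures T_H and T_C:

```latex
\[
\eta_{\mathrm{Carnot}} \;=\; \frac{W}{Q_H} \;=\; 1 - \frac{T_C}{T_H}
\]
```

The abstract’s point is precisely that the model does not reduce to such a formula: the diagrammatic cycle, the idealizations and the epistemic purposes built into it are part of its content as well.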

 

AMENDING AND DEFENDING CRITICAL CONTEXTUAL EMPIRICISM: LESSONS FROM MEDICAL RESEARCH

Kirstin Borgerson

Dalhousie University

In Science as Social Knowledge (1990) and The Fate of Knowledge (2002), Helen Longino develops a social epistemological theory known as Critical Contextual Empiricism (CCE). While Longino’s work has been generally well-received, there have been a number of criticisms of CCE raised in the philosophical literature in recent years. In this paper I outline the key elements of Longino’s theory and propose several modifications to the four norms offered by the account. The revisions I propose are shaped by a number of developments in the medical context in recent years. The modified norms, which determine whether a particular community produces objective knowledge, are thus:

 

Avenues for Criticism – there must be recognized avenues for criticism, and these avenues must be publicly accessible and require transparent disclosure of all relevant information (including competing interests) from those who present their ideas. It must also be a community requirement that all members present their ideas for critical scrutiny if they wish them to be recognized as knowledge.

 

Responsiveness to Criticism – the community must be responsive to criticism.

 

Shared Public Standards – there must be some shared standards that determine community membership. Outsiders to a particular community are welcome to engage in critical debates as long as they share at least one of the community standards with the target community.

 

Cultivation of Diverse Perspectives – communities must cultivate diverse perspectives, that is, the perspectives of those who express strong dissent.

The version of CCE I defend gives the principle of diversity a more central role than the original and provides greater specification of two of the other norms in light of challenges faced by medical researchers in recent years. The medical context provides us with a number of cautionary tales in which knowledge production that appears to meet the original four norms has been seriously compromised by particular social interests. The proposed modifications attempt to address these ‘loopholes’ in a way that is not ad hoc. I argue that the modifications I suggest are in line with the underlying assumptions and goals of CCE.

The modified version of CCE also offers resources for defending CCE against the criticisms leveled against it by Miriam Solomon & Alan Richardson, Alvin Goldman and Philip Kitcher as well as one general concern arising out of a recent work by David Michaels. I provide responses to these criticisms in the final section of the paper. Throughout the paper I connect the theoretical work done in social epistemology to the real practice of knowledge-production as it occurs in the medical context. In light of the variety of social pressures influencing contemporary scientific research, and the role of science in shaping public policy, I argue that a rigorous social epistemology such as CCE is indispensable for understanding and assessing contemporary scientific practice.

 

COEXISTENCE OF SEVERAL INTERPRETATIONS OF QUANTUM MECHANICS AND THE FRUITFULNESS OF SCIENTIFIC WORKS

Thomas Boyer

University Paris 1

A general problem in the philosophy of science is discussed, namely the long-lasting coexistence of several interpretations of a mathematized theory, and the case of quantum mechanics is taken as an example (another possible study would be electromagnetism, which long developed with different kinds of interactions). A discomfort is often associated with such a situation: shouldn’t the interpretation be consensual? A proliferation of competitors is usually seen as a feature of crisis. However, I intend to show that it is possible to account for this situation as non-pathological, focusing on the compatibility of the plurality of interpretations with scientists’ works.

A mathematized physical theory usually calls for an interpretation, whose goal is to give an image of the world compatible with the mathematical formalism. In quantum theory, several interpretations have been proposed, without any clear consensus. These interpretations appeal to different (but mathematically equivalent) formalisms. Together with the correspondence rules to experimental measurements, these formalisms compose what I shall call a single “predictive core”. The various interpretations are considered as predictively equivalent.

This can suggest why the coexistence of these interpretations, which has lasted for several decades, does not really seem to have disrupted the research in quantum physics. Thus, a plurality of interpretations might be a normal feature of scientific theories, despite the usual unease about it. Let me specify to what extent it can be acceptable or not. Considering that scientific activity aims at improving the knowledge of the field – seen as the result of researchers’ works: it comprises the theory, its empirical success and agents’ know-how – I shall defend the view that a theory is in a normal situation when it allows scientists to best achieve this epistemic progress (cf. Kitcher, 1993). My task now is to make explicit the conditions this requires and to show that they are satisfied during a coexistence of interpretations.

My proposal is to introduce the concept of “fruitfulness of works” as central to the practical ability to progress, by which I mean that scientists will contribute all the better if they can take advantage of the results of their colleagues’ work. Works are fruitful when they can be used more or less straightforwardly in further research. I show in the paper that scientists can benefit from this fruitfulness, even if they do not all share the same interpretation of the theory. Indeed, what is really needed to reuse a scientific paper? For theoretical works (i.e. computations or developments of laws or models), the condition of fruitfulness is the identity or equivalence of formalisms; for experimental predictions, it is the uniqueness of correspondence rules. In fact, either the interpretation is absent from the published works, or it can easily be translated into another. So, what matters for fruitfulness is the uniqueness not of the interpretation but only of the predictive core; this is the case in quantum mechanics.

Insofar as it doesn’t hamper the progress of the field, a coexistence of interpretations should be considered as normal, and it must be clear that the embarrassment with it stems from other considerations. In a way, this re-legitimates the various quantum interpretations and their own value (cf. Kellert, Longino, Waters, eds, 2006). More generally, I suggest that the limit to pluralism in science is a condition of fruitfulness of works, as far as one considers this practical requirement of the progress of knowledge.

 

ON THE VERY IDEA OF A STYLE OF REASONING

Alexandra Bradner

Denison University

Although Ian Hacking’s meta-concept is frequently applied to historical cases, few theorists have questioned the very idea of a style of reasoning. Hacking himself considers Donald Davidson’s conceptual scheme argument to be the most formidable challenge to the style idea, but I argue Hacking has set up a straw man in Davidson. Beyond Hacking’s own conclusion, that Davidson's narrow concern with meaning incommensurability does not apply to styles, which are not incommensurable in that way, there is the more obvious point that styles, which do not organize or fit the world, are not the kind of schemes with which Davidson is concerned. In fact, Hacking agrees with Davidson, in that both propose we can argue over a topic only when we employ the same style of reasoning (a suggestion which I contend is not necessarily the case). Hacking has a more serious problem, in that he cannot remain a Kantian without justifying his style idea with a transcendental argument. But this kind of argument is only available to those who support a univocal notion of reason, which the very idea of a style seems to outlaw. He has overlooked the challenges which arise out of Arthur Fine’s NOA. Hacking’s attention to historical detail and unwillingness to employ transcendental arguments in support of his view renders him immune to Fine’s arguments against inference to the best explanation. But the list of the necessary and sufficient criteria which identify styles of reasoning cannot prevent the proliferation of (bogus) styles. Moreover, Fine’s call for openness in inquiry shifts the burden of proof against Hacking and calls for him to prove: 1) that we cannot understand the history of science without the style meta-concept and 2) that whenever we encounter a mystery, our first order of business should be to stand back and uncover the style of reasoning which makes the predicament possible, instead of getting directly to the business of solving the problem. According to Fine, some puzzles just happen. And there is no guarantee that uncovering the style, which identifies the topic as a topic, will have anything to do with solving the mystery. As I will show, not only is there no such guarantee, but Hacking’s position, if it is to sidestep Davidson’s critique, actually requires that styles play only the minimal role of identifying topics. Styles must be fairly trivial. In fact, as Philip Kitcher’s recent work on evolutionary psychology has suggested, there are instances in which relying upon the style to do anything but identify topics can be premature at best and socially explosive at worst. Fine thinks it is deeply unnatural to view science as entertainment. Philosophers are not to shine their lights on the action and watch it unfold. Rather, we are all to join the performance, thereby eliminating the artificial boundary between actor and audience which we mistakenly call upon to reinforce the notion that the raw material of science constrains our reflections about it.

 

KNOWLEDGE, VALUES, AND EPISTEMIC AUTHORITY IN LAND MANAGEMENT

Evelyn Brister

Environmental philosophers have argued that a consequence of the ideology of value-free science is that US environmental agencies and departments, such as the EPA, employ a linear model of policy-making (Norton 2005). In this model, scientists contribute their expertise to the decision-making process prior to and independently of value-based assessments. The unfortunate result is that scientific expertise is less relevant to decision-making than it might otherwise be.

We might expect that in contexts where environmental decision-making is more local, the ideology of value-free science does not interfere with policy-making because local land managers play both roles—of scientific expert and of political mediator. I demonstrate, however, that a consequence of separating knowledge production from normative judgment is that the democratic land management process grants public participants moral respect but not necessarily epistemic respect.

The USDA Forest Service prioritizes public participation in its management plan process. The management plans for public lands undergo frequent revision, and each revision solicits and responds to public comments. I examine the most recent management plan revision in the Finger Lakes National Forest. It is not unusual for the planning process to be fraught with miscommunication and mistrust, as was the case with this revision. However, the democratic process conceptualizes the source of conflict as being located in different visions of environmental goods. Land managers fulfilled the expectations and requirements of the process by granting participants respect as moral agents. They did not, however, grant participants epistemic authority. In general, the focus of public meetings is normative evaluation, not epistemic negotiation.

In this case, a key source of conflict was not a difference in values but rather a difference between what community participants claimed to know and what land managers claimed to know. Forest residents possessed situated knowledge in virtue of their experience living in the forest. But these participants lacked the scientific authority of land managers, and they were misunderstood as making scientific rather than everyday experiential claims. When judged as scientific, these claims were false. But once the residents gained the expertise to express their claim using an accepted GIS tool, their claim was understood as having merit.

I argue that participants’ lack of access to scientific expertise in the land management process is a form of epistemic injustice (Fricker 2007). Power-distorted practices of assigning epistemic authority privileged some kinds of knowledge over other kinds, undermining the ability of forest residents to engage in collaborative inquiry. In addition, the notion that science is and ought to be value-free contributes to land managers’ perception that their work in fact-based inquiry is separable from their work in recognizing and balancing the public’s expressions of values. Data sharing is a viable solution to the problem of epistemic access in this context; I also point out the suitability of GIS as a tool which supports public participation in knowledge production.

·

Fricker, Miranda. 2007. Epistemic Injustice. New York: Oxford University Press.

·

Norton, Bryan. 2005. Sustainability: A Philosophy of Adaptive Ecosystem Management. Chicago: University of Chicago Press.

 

PRAGMATIC TRUTH AND SCIENTIFIC PRACTICE

John Capps

Rochester Institute of Technology

While, historically, pragmatism was closely associated with scientific practice (both Peirce and James, after all, were practicing scientists), the connection had, until recently, grown tenuous. Philosophical pragmatists were more likely to focus on normative issues, such as ethical and political theory, while mainstream philosophers of science were more likely to pursue theoretical issues where pragmatic approaches seemed to shed little light. In addition, because pragmatism is a form of naturalism, and thus takes a broadly scientific approach itself, it wasn’t clear that pragmatism could shed any critical light on scientific practice.

Recently, however, pragmatic approaches have played a more prominent role in explaining scientific practice.(1) One particular role involves the notion of scientific truth, where the broadly pragmatic idea that truth is connected to long-term instrumental success seems to accord with actual scientific practice. After all, to claim that a scientific theory or statement is true seems to imply that it will stand up to further, indefinite, experimentation: roughly the pragmatic criterion of truth.

However, the exact contours of a pragmatic theory of truth are not themselves clear, and the theory remains open to a number of familiar criticisms. There is, first, a multitude of different “pragmatic” theories of truth: from the classical theories of Peirce, James, and Dewey to the modern versions of Hilary Putnam, Christopher Hookway, Crispin Wright, Cheryl Misak and others. In addition, there are the common concerns that pragmatic truth conflates what works with what is really true, and that it makes truth a remote concept unverifiable except at the speculative end of some inquiry (to use a Peircean formulation).

Enough background: in this paper I argue that not only can a pragmatic theory of truth shed light on scientific practice but that, just as importantly, scientific practice can shed light on a pragmatic theory of truth. First, a pragmatic theory of truth can better explain actual scientific practice than either a correspondence theory or a deflationary “minimal” theory of truth. This is because correspondence theories tend to address the question of what makes a theory true, a question which raises myriad problems (including issues of realism vs. antirealism), while deflationary theories fail to explain the value and motivations for pursuing true theories.

Second, however, attending to actual scientific practice can also help refine a pragmatic theory of truth. Otherwise, a pragmatic theory of truth risks making the truth unknowable by suggesting that, because we have not yet reached the hypothetical end of inquiry, we cannot know which of our scientific theories are true.(2) Rather, scientific practice shows that there is, often, no reason to withhold attributing knowledge, and thus truth, long before the “end of inquiry” is in sight. This is because of contextual features guiding scientific practice, the sorts of contextual features that bring inquiry to a close for all practical purposes. As a result, this insulates a pragmatic theory of truth from a common and powerful criticism.

To sum up, I argue that, first, a pragmatic theory of truth is especially well-suited for understanding scientific practice and, second, that paying attention to scientific practice can help a pragmatic theory of truth avoid many of the standard objections to this theory.

(1) See, for example, Newton C.A. da Costa and Steven French, Science and Partial Truth (Oxford, 2003) and the essays by Hacking, Fine, and Misak in Cheryl Misak, The New Pragmatists (Oxford, 2007).

(2) Cf. Cheryl Misak, Truth, Politics, Morality (Routledge, 2000).

 

HISTORICAL EXPERIMENTS, LOST KNOWLEDGE, AND THE PURPOSE OF SCIENCE EDUCATION

Hasok Chang

University College London

Why do we teach science to students who are not going to become professional scientists? I will argue that the primary purpose of general science education should be teaching the habit of active inquiry, rather than transmitting canonical knowledge. That is perhaps a common point of view among progressive educationists. The novelty I want to contribute is that such a purpose can be served by the reproduction of experiments from the history of science.

My proposal focuses on neglected historical experiments. An open-minded study of history often leads to a recovery of valuable knowledge from past science that is forgotten, neglected or even suppressed by modern science. From my own recent work, experiments on the variations in the boiling point of pure water (under standard pressure) offer a wonderful example of such recovery of lost knowledge. These unruly variations, which depend on the material of the vessel employed, on the exact manner of heating, and on the amount of air dissolved in the water, were reported by a large number of 18th- and 19th-century scientists including De Luc and Gay-Lussac. These anomalous-sounding results were quite easily reproduced in my own experiments. (For further details see http://www.ucl.ac.uk/sts/chang/boiling/.)

Reproducing intriguing results from past science can also lead to further investigations, and this is shown even more clearly in another series of experiments, starting with a very simple one by Wollaston in 1801. Wollaston began with the familiar observation that certain metals dissolve in acids, releasing bubbles of hydrogen. For instance, a zinc wire placed in hydrochloric acid (HCl) releases hydrogen. Add to this same pot of acid a copper wire, and no reaction happens on that side, since HCl does not attack copper. Now if we make the two wires touch, hydrogen bubbles immediately start issuing from the copper as well as the zinc. Trying to understand the experiment in modern terms reveals surprising challenges. Modern textbook accounts say that the H+ ions in the acid take electrons from the zinc to turn themselves into neutral hydrogen gas, turning the zinc into Zn++ ions, which dissolve in the acid solution. But then why does the reaction generate any excess electrons that travel over to the copper side to make hydrogen gas there? Many intriguing variations can be made on this experiment, all of them quite challenging to explain in modern terms. It is also notable that the topology of Wollaston’s experiment is the same as that of the Voltaic cell: two different metals with an electrolyte between them. Knowledgeable historians are aware that there were serious and long-lasting disputes about the mechanism of the Voltaic cell throughout the 19th century, and my investigations show that even modern chemistry does not have a ready explanation of it.
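For reference, the textbook account mentioned above corresponds to the standard half-reactions of zinc dissolution (my rendering, not quoted from the abstract):

    % Oxidation at the zinc, reduction of hydrogen ions; the puzzle raised above is
    % why, once the wires touch, the reduction also proceeds at the copper surface.
    \[
      \mathrm{Zn} \;\rightarrow\; \mathrm{Zn^{2+}} + 2e^{-},
      \qquad
      2\mathrm{H^{+}} + 2e^{-} \;\rightarrow\; \mathrm{H_{2}}\uparrow
    \]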

The implications of such cases for science education are clear. The boiling-point experiments show that what we normally teach students about boiling (and phase change in general) is plainly false, or at least unreasonably oversimplified. The electrochemical experiments show that the humble Voltaic battery is actually a challenging theoretical problem worthy of serious debate. We may view these as awkward problems, cracks to be papered over as deftly as possible in the classroom. But we may also use them as rare opportunities to give students a genuine experience of open-ended scientific inquiry, using only very simple and cheap materials and apparatus. There are all sorts of variations in the phenomena to investigate, and various ways of theorizing about the results; exploring these phenomena and conceptions would not only let students learn something important about nature, but also about what it means to do original research.

This kind of reflective and active learning is of course possible in the realm of theory, too. However, on theoretical points most students or even teachers would find it difficult to challenge the received wisdom. With experiments it is easier to think independently, as the direct interaction with nature would provide students with the confidence and resources to pursue their own lines of inquiry.

 

INVERTING THE PYRAMID: A REASSESSMENT OF THE ROLES OF EXPERIMENT IN EVIDENCE-BASED MEDICINE

Brendan Clarke

University College London

There are broadly two types of experimental practice in common use in medicine. One is clinical, based on the investigation of intact humans. The most common example of clinical experiment is the randomised controlled trial. The other main type of experiment is laboratory based. This usually involves the investigation of entities as potential causes of disease, as parts of pathogenic mechanisms, or as candidates for therapeutic interventions.

The evidence-based medicine (EBM) movement aims to develop high-quality, effective healthcare by basing medical practice upon scientifically derived information. So, for instance, when we select a medication, we should do so on the basis of relevant trial data, rather than on the basis of expert opinion. Many practitioners of EBM suggest that we should take clinical evidence much more seriously than other forms of experimental evidence. In fact, evidence arising from laboratory investigation is now very often regarded as mere background knowledge, on a par with expert opinion. This goes against the intentions of the originators of the modern EBM movement [e.g. Guyatt et al, 1992], who viewed the interpretation of the results of laboratory investigation as a necessary prerequisite for effective clinical experimentation.

I argue that this neglect of laboratory investigation by practitioners of EBM is a mistake; useful clinical experiment critically depends on strong foundations in basic science. Without this background, it is very difficult to exclude systematic error or confounding as sources of erroneous effects in clinical trials, with the result that ineffective treatments may be strongly supported by 'best' evidence, or genuinely effective treatments neglected for want of it.

So I suggest a philosophically motivated return to the careful consideration of laboratory evidence as an integral part of EBM practice. When appraising clinical evidence, we should do so in the light of the best relevant laboratory evidence. This does not mean that we should replace clinical evidence with basic science. Instead, both are necessary parts of the critical appraisal of evidence. I go on to argue that we should consider the interplay between these classes of evidence in a causal fashion. While EBM is not explicitly concerned with providing a causal interpretation of evidence (rather, it is concerned with demonstrating the applicable efficacy of an intervention), a causal interpretation may assist in excluding effects that appear to be due to systematic errors. My preferred causal interpretation, that espoused by Russo and Williamson [2007], is itself pluralist with respect to evidence, considering both mechanistic and statistical data in reaching causal decisions. It is also monistic with respect to causation, in keeping with common medical practice. I thus go on to suggest possible ways of applying the Russo-Williamson thesis as a practical causal tool for interpreting a range of medical evidence.

Guyatt, G., et al. 1992. "Evidence-based Medicine. A New Approach to Teaching the Practice of Medicine," Journal of the American Medical Association 268(17): 2420-5.

Russo, F. and Williamson, J. 2007. "Interpreting Causality in the Health Sciences," International Studies in the Philosophy of Science 21(2): 157-170.

 

EVIDENCE FOR USE: THE ROLE OF CASE STUDIES IN POLITICAL SCIENCE RESEARCH

Sharon Crasnow

Riverside Community College – Norco

In the last 15-20 years political science research has been increasingly dominated by quantitative methods. The success of these methods in economics, and investment in data collection in a number of subfields (from international relations and political economy to voting studies and the analysis of public opinion), have reinforced this trend. As this shift has taken place, qualitative research has not vanished. But supporters of quantitative research have argued that qualitative analysis should be confined to the context of discovery: case studies can at best inform the design of more rigorous statistical work. More recently it has been suggested that case studies also serve a role in the context of justification. They can be used to test the robustness of the empirical regularities derived from quantitative research.

A variety of issues are raised, but among the more interesting is the question of validity of purposive selection of cases vs. random sampling. Fearon and Laitin (2008) have argued that the purposive selection of case studies by researchers will always be biased. Such bias threatens objectivity and thereby undermines the value of qualitative work. They propose a method of random sampling of cases and the use of such cases to test the results of statistical work. However, there may be issues with random sampling that undermine the contrast between these approaches. First, the random sample may in practice be a convenience sample and so the distinction between purposive sampling and “in practice” random sampling may collapse. There is a second, deeper worry. Nancy Cartwright (2007) has recently challenged the idea that random sampling provides the gold standard for evidence and argues that a multiplicity of methods is needed to support knowledge that can be put into practice.

I argue that the question of the appropriate method for case study selection cannot be fully understood without examining the role of evidence for use. Randomized selection of cases may yield cases in which we have no interest, and therefore may not provide us with grounds for the use of knowledge produced through statistical methods. For example, there is a wide-ranging literature on the determinants of the onset and settlement of civil wars. However, we are not interested in why Andorra does not have such conflicts, but in why Iraq does and how we can resolve them. In order to make use of our knowledge, purposive selection of cases is necessary. However, the introduction of social values, pragmatic interests, and political concerns does not mean forsaking epistemic values. Rather, non-epistemic values inform a weighting of various epistemic values. By rethinking the idea of randomized selection as a gold standard and re-examining the distinction between epistemic and non-epistemic values, the debate about case selection can be reframed so that the role of case studies as a means of providing evidence for knowledge that is both empirically and pragmatically adequate can be better understood.

 

SCIENTIFIC DISCOVERY IN AN ASYMMETRICAL LANDSCAPE: C.V. RAMAN AND THE BUILDING OF A RESEARCH TRADITION IN COLONIAL INDIA

Deepanwita Dasgupta

University of Minnesota

This paper addresses the question of how cognitive labor gets divided in science when two research communities, one metropolitan and the other a relative newcomer (and thus unequal in power and privilege), collaborate with each other over the development of a research program in science. When a new research community joins the network of scientific knowledge, it operates, at least in the beginning, from a position of marginality. Thus, the only way it can contribute to the task of making scientific knowledge is by collaborating with another, more privileged, metropolitan research community. How do the dynamics of scientific knowledge unfold when two scientific communities work with one another, but one of them holds more epistemic privileges than the other?

Kitcher (2000) briefly discusses, but does not explore, this scenario; yet it is one that we observe historically. During the late 19th and early 20th century, several new national traditions of science were formed in countries like Japan, Korea and India. Thus, throughout the early 20th century, new communities of non-Western researchers joined the network of scientific knowledge, so far considered mostly a Western preserve. Naturally, such researchers developed complex relationships of collaboration and competition with science done at different Euro-American centers, and much of their work was done in a cognitive landscape that was asymmetrical in matters of intellectual authority. How was scientific knowledge made outside of its standard Western locations by those smaller and peripheral research communities in collaboration with their dominant metropolitan counterparts? What positions did such peripheral researchers enjoy in the network of scientific knowledge? Did their work only consist in filling up the gaps of the metropolitan research programs, or did they also get to propose and develop new research programs of their own? If a controversy arose, how were trust and consensus handled?

To explore these questions, I use a case study from early 20th-century colonial India: C.V. Raman, an Indian physicist who in 1928 discovered a new type of scattering effect, now known as the Raman Effect. Raman’s discovery constitutes an episode in which a research program in optics was developed by a peripheral researcher who occupied very different levels of authority and privilege (in matters of scientific knowledge) than his metropolitan counterparts. Exploring how Raman’s style of visual reasoning and his instrumentation led him to the discovery of his new effect, I show how Raman’s problem choices and his results were shaped by his attempts to gain a foothold in Western science.

The goal of this paper is to understand the spread of Western science to non-Western contexts, to develop a conceptual framework adequate to capture such circulation of knowledge, and to clarify in what sense the work of researchers like Raman can be called the beginning of a new national tradition of science.

Kitcher, Philip. 2000. "Patterns of Scientific Controversies," in Scientific Controversies, ed. Peter Machamer, Marcello Pera and Aristides Baltas. New York: Oxford University Press.

 

IMPROVING MEDICAL EXPLANATIONS: RETHINKING EXPLANATORY STRUCTURE AND AGENCY

Barry DeCoster

Worcester State College

How should clinicians explain illness to patients? Can only clinicians generate medical explanations? What counts as a good explanation for patients? For clinicians? Explanations of disease are important tools of clinical medicine and biomedical research. In this paper I argue that philosophical explanations of disease, as formulated in the philosophy of science, have been used wrongly in the clinical practice of explaining patients’ sickness.

I criticize current views of medical explanation as wrongly understanding both the proper structure of medical explanations, as well as their proper agents of explanation. Here, I focus on three problems raised by the study of medical explanations of disease.

First, Paul Thagard (How Scientists Explain Disease) and Kenneth Schaffner (Discovery and Explanation in Biology and Medicine) have argued that medical explanations should be structured as scientific explanations. Philosophers of science have advocated primarily ontic models of disease explanation that focus on complex causal-mechanistic interactions, e.g., between environment, the body, genetics, and infectious agents. As such, successful medical explanations have been thought to be those that clearly lay out cause and effect regarding disease. Yet, as I argue, ontic approaches remain inadequate as a basis for clinical explanations, given that such approaches often fail to meet patients’ explanatory needs. In addition, the practice of medicine and medical research is interestingly different in that complete causal explanations are often impossible to develop.

Second, these standard models of medical explanations have failed to account for the epistemic contributions of patients to clinical explanations. Thagard’s explanatory strategy, for example, recognizes only the epistemic authority of clinicians and erases patients’ explanatory roles. This schema disempowers patients by viewing them only as subjects of explanation, rather than acknowledging them as epistemically authoritative subjects capable of participating in generating medical explanations.

I develop this position by distinguishing between two types of medical explanations: ‘biomedical research explanations’ and ‘clinical explanations’. Research explanations are epistemic projects that clarify how disease works in the human body. Biomedical researchers generate research explanations from the laboratory and refine them over periods of time. As such, research explanations are what I call peer-peer explanations: clinician- or researcher-generated explanations to be used by other clinicians and researchers.

In contrast, clinical explanations are expert-layperson explanations; they are generated and used by non-peers. Typically, clinical explanations are for patients’ use, often as part of an informed decision-making process about patients’ health. Clinical explanations are interesting here due to their importance as both epistemic projects and moral/political projects. Rather than being generated in laboratory or research settings, clinical explanations are most frequently generated during face-to-face communication between doctors and patients. Clinical explanations in particular should both reduce patients’ puzzlement about their sickness and inform their health decision-making process.

Finally, standard theories currently make no requirements for patients’ uptake of clinical explanations. In other words, if patients fail to understand the explanation provided by a clinician, this failure to understand is assumed to be a limitation or defect of the patient, not the explanatory strategy.

 

THE NEED FOR A PRACTICAL CONCEPT OF DISEASE

Dr. Leen De Vreese

Ghent University

Given that our understanding of what a “disease” is influences, in important ways, our practices towards those we conceive of as “diseased people”, philosophers have an important role to play in analysing the conceptualization of the notion “disease” and in establishing a justified view on the matter that fits with, and improves, health research and practice.

In my talk, I will defend a pluralistic view of the concept of “disease” which relies on different kinds of disease causes. According to this view, physical and mental diseases cannot be clearly separated but should be situated on the same continuum of kinds of diseases, although the two classes of disease might tend towards opposite extremities of this continuum. Such a continuum approach opposes the mainstream of the current philosophical debate on the concept of “disease”. In this debate, the search for a single, monolithic definition of “disease” still stands in the foreground. Further, the concept of a “mental disease” is often interpreted as being categorically different from the notion of a “physical disease”. And lastly, the social constructivist approach to the concept of “disease” is often seen in this debate as totally opposed to the biological-basis approach. This state of affairs is astonishing given the diversity of diseases and the different degrees of influence of sociocultural beliefs on disease conceptualization. Insofar as philosophers aim at a descriptive view of what the notion of “disease” covers, they would do better to consider an account along the lines of psychologist Nick Haslam’s account of mental disorders (Haslam 2002). He recognizes different kinds of mental disease causes as defining different kinds of mental diseases (“kinds of kinds”). Some kinds of mental diseases need a pragmatist account while others need a realist or even essentialist account, according to Haslam’s framework. The continuum approach that I will propose is based on this view of Haslam’s, but broadened to the concept of “disease” in general in order to include physical diseases. I will argue that such an account stands much closer to medical practice, including psychiatry. And, what is equally important, such an account also furthers a more nuanced and more appropriate view of what it means to be “diseased”.

In the second part of my talk, I will go deeper into some of the problems that might result from an oversimplified view of the notion of “disease”, in order to clarify the need for a more nuanced view. I will briefly highlight problems such as medicalization and essentialistic thinking in health matters, the stigmatization of “diseased” people, ethical problems of e.g. genetic screening, and the building of excessively high expectations for scientific evidence. I will use ADHD in children as a central example to make these problems concrete, and to show how taking a more nuanced stance on what is and what is not a “disease” can make us look at, for example, ADHD from a more appropriate perspective. Such a perspective will not amount to denying that ADHD is a “real disease”, nor to overemphasizing its possible biological basis, but will enable one to appreciate ADHD as a disease of a certain kind within a range of different kinds of diseases.

Haslam, Nick (2002), “Kinds of Kinds: A Conceptual Taxonomy of Psychiatric Categories,” Philosophy, Psychiatry & Psychology, vol. 9, no. 3, pp. 203-217.

 

EXPLANATIONS IN SOFTWARE ENGINEERING: THE PRAGMATIC POINT OF VIEW

Jan De Winter

Ghent University

Research on scientific explanation shows that there is not one kind of explanation that guarantees maximal explanatory power. Different kinds of explanation are legitimate (e.g., Pettit, 1996; Weber and Van Bouwel, 2002). A question that then arises is whether one can randomly choose a kind of explanation without running the risk of choosing a deficient explanation-type (‘anything goes’). If not, one can ask what kinds of factors determine which explanation-type is best. Philosophical analyses indicate that the following factors can influence the appropriateness of an explanation-type: the information looked for (Pettit, 1996), the explanation-seeking question (e.g., De Langhe, forthcoming; Van Bouwel and Weber, 2008), and the function the explanation should serve (e.g., Weber, 1999; Weber and Vanderbeeken, 2005; Weber, Van Bouwel and Vanderbeeken, 2005).

In this paper, I construct a framework for explanatory practice in software engineering. It is assumed that explanations are answers to why-questions. These questions can have the following formats:

(P-contrast) Why does object a have property P, rather than property P’?

(P and P’ are mutually exclusive properties.)

(T-contrast) Why does object a have property P at time t1, but property P’ at time t2?

(O-contrast) Why does object a have property P, while object b has property P’?

(plain fact) Why does object a have property P?

Such questions are motivated by certain reasons or interests. I argue that several explanation-types are legitimate in software engineering, and that the appropriateness of an explanation-type depends on (a) the engineer’s interests, and (b) the format of the why-question he asks, with this format depending on his interests. The explanation-type that best serves the engineer’s interests, and that best fits the question he asks, has most explanatory power.

This point of view is clarified by considering examples of explanatory challenges that turn up while developing a computer program that allows the user to generate a schedule distributing twenty-one games (each between two different teams) over as few days as possible, given that (a) there are two conferences, each containing three teams, (b) teams of the same conference have to play each other twice, while teams of different conferences compete only once, and (c) a team cannot play more than one game per day. Explanations that can help one to create such a computer program are proposed, and the relevant explanation-types are spelled out. The resulting survey of different kinds of technological explanation is complementary to other proposals about the nature of technological explanations (e.g., see Kroes, 1998; De Ridder, 2007).
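To make the explanatory target concrete, here is a minimal sketch of the scheduling problem just described (my illustration, not from the paper). The team names and the greedy day-packing heuristic are assumptions introduced only for illustration; the heuristic is not guaranteed to produce an optimal schedule.

    # A minimal sketch (not from the paper): 21 games between six teams in two
    # three-team conferences, packed greedily into days so that no team plays
    # more than one game per day. Team names and the heuristic are assumptions.
    from itertools import combinations, product

    conferences = {"A": ["A1", "A2", "A3"], "B": ["B1", "B2", "B3"]}

    # Required games: two per same-conference pair, one per cross-conference pair.
    games = []
    for teams in conferences.values():
        for pair in combinations(teams, 2):
            games.extend([pair, pair])                 # same conference: twice
    games.extend(product(conferences["A"], conferences["B"]))  # cross-conference: once
    assert len(games) == 21                            # 2*(3 pairs*2) + 3*3 = 21

    # Greedy day-packing: fill each day with as many remaining games as possible
    # while respecting the one-game-per-team-per-day constraint.
    schedule = []
    remaining = list(games)
    while remaining:
        day, busy = [], set()
        for game in list(remaining):
            if busy.isdisjoint(game):
                day.append(game)
                busy.update(game)
                remaining.remove(game)
        schedule.append(day)

    for i, day in enumerate(schedule, 1):
        print(f"Day {i}: {day}")
    print(f"Days used: {len(schedule)}")               # lower bound is ceil(21/3) = 7

Why-questions of the kinds listed above arise naturally here, for example the P-contrast question of why a given game is scheduled on one day rather than another.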

One of the main virtues of the paper is that it demonstrates that the plausibility of explanatory pluralism is not restricted to the human sciences. The idea that more than one explanation-type is legitimate in the human sciences is widely accepted by philosophers of science (e.g., Førland, 2004; Marchionni, 2008; Van Bouwel & Weber, 2008). However, in my opinion, the explanatory pluralistic framework can be expanded to other contexts as well. In the paper, it is shown that it can at least be expanded to software engineering.

 

UNCERTAINTY AND PUBLIC HEALTH RESEARCH ETHICS

Emily Evans

Georgetown University

Uncertainty is a necessary condition for the sound moral and scientific conduct of research involving human subjects. If the expert scientific communities, medical or otherwise, lacked uncertainty about the interventions under investigation, it would be unethical to knowingly subject individuals to inferior or harmful treatment. Moreover, if the relative merits of the interventions were previously established, as indicated by the lack of uncertainty within the relevant expert community, the results of the trial would be of little, if any, scientific value.

Despite the important role that uncertainty plays in the formulation and conduct of research involving human subjects, the concept has received inadequate treatment in the research ethics literature. To the limited extent that uncertainty is addressed, much of the emphasis remains on the ethical, not epistemic, aspects of uncertainty. What is left insufficiently examined is the nature and scope of the uncertainty.

Often, uncertainty is glossed as agnosticism, indifference, and conflict. Yet these terms represent distinct types of uncertainty, and the type that obtains impacts the epistemic and ethical justifications for a proposed trial. A failure to recognize and clearly articulate the parameters of uncertainty in particular situations compromises our ability to respond to important moral and scientific concerns in the development and conduct of research.

This problem is particularly pronounced in public health research that investigates problems resulting not only from uncertainties in scientific knowledge but from institutional failures, economic and social constraints, and lack of political will. Indeed, the role of research in public health is tied to its capacities to successfully navigate such complex questions. The way in which a study is designed and carried out, as well as ethically and scientifically justified, must reflect a robust and systematic characterization of the uncertainties present.

Using an example from a study examining the efficacy of using treated sewage sludge to remediate lead-contaminated soil, I explore and analyze the aforementioned deficiencies in understanding uncertainty among researchers and ethicists. The case study serves as a vehicle through which to illustrate (1) important considerations in characterizing and addressing uncertainty; (2) the influence of the uncertainty characterization on the formulation of parameters used to evaluate the ethical permissibility of the research; and (3) ways in which these parameters might be operationalized in the context of a particular study.

 

COLLABORATION: TOWARD AN INTEGRATIVE PHILOSOPHY OF SCIENTIFIC PRACTICE

Melinda B. Fagan

Rice University

Philosophical understanding of experimental scientific practice is impeded by disciplinary differences, notably that between philosophy and sociology of science. Severing the two limits the stock of philosophical case studies to narrowly circumscribed experimental episodes, centered on individual scientists or technologies. The complex relations between scientists and society that permeate experimental research are left unexamined. In consequence, experimental fields rich in social interactions (notably biomedicine) have received only patchy attention from philosophers of science. This paper sketches a remedy for both the symptom and its root cause. An empirical study of social interactions in an established field of biomedicine combines with philosophical study of the concept of collaboration, to yield an integrative account of successful experimental inquiry. Collaboration, here understood as participants working together on a common project toward a shared goal, is both examined and enacted, as the interactive social integument of experimental research is brought into focus. Socio-historical and philosophical approaches are used in concert to explicate the concept of scientific objectivity. Joint explication of this contested epistemic ideal demonstrates that philosophical and sociological approaches can work together toward a social epistemology of scientific practice.

The explication is in three stages. First, a minimal framework for investigating collaborative activities is established. Social action is understood and evaluated in terms of the connection between shared goals that participants hope to accomplish together, and the coordinated means by which they try to do so. This connection is explicated as participation, a relation mediating between a group and its members, which includes minimal constraints of instrumental rationality. Second, this framework is fleshed out via empirical study of scientific practices. The focal case examines the intersection of immunology and stem cell research in mid-20th century biomedicine, tracing the key social interactions within and among laboratory groups, as the field of blood stem cell research emerged in the 1960s and advanced throughout the next four decades. The study yields a robust empirical result. Participants consistently recognize two aspects of scientific success: construction of improved models of blood cell development, and formation of new boundaries among scientific groups. In the third and final stage, this result is generalized to other experimental episodes and shown to fit with recent accounts of models in scientific practice. The generalized result approximates a familiar normative view of scientific knowledge. An epistemic ideal of scientific objectivity in practice is then derived from this robust general result, using the minimal constraints on rational participation. The derivation is analogous to specification of ends in moral philosophy; given the means taken and assuming some hope of success, what must the goal of scientific inquiry be like? The aim of science so conceived corresponds to a classic conception of scientific objectivity: knowledge independent of epistemic criteria specific to particular persons or groups.

This result weaves together sociological and philosophical accounts of science, explicating the epistemic ideal of objectivity in relation to social aspects of scientific practice. An entrenched dualism between normative (evaluative) and descriptive (comparative) approaches to scientific knowledge is thereby undercut; philosophy and sociology are recast as collaborating participants in articulating a social epistemology of science.

 

UPTAKE FROM THE COMMONS: TRACKING THE ACCESS AND USE OF PUBLIC DOMAIN DATA RELEASED BY THE C. ELEGANS GENE KNOCKOUT CONSORTIUM

Lily Farris

The University of British Columbia

Garrett Hardin raised concerns in 1968 that, without private property rights over resources such as pasture land, individual herders would overgraze the shared pasture, destroying the opportunity for all herders to reap the benefits of shared lands or commons. In 1998 Heller and Eisenberg pointed out that the opposite could also occur, when exclusive rights over a single resource are given to different right-holders (Merges, 2006: 186; Heller and Eisenberg, 1998). This has become a real concern in biotechnology, where some argued that such fragmentation would inhibit innovation in particularly patent-heavy scientific areas. In science, some have chosen to make their data and scientific resources publicly available, either to pre-empt property rights claims over potentially valuable inputs to the research process or to increase the pool of potential users who can take up and engage with these materials. While some point out that the threat of the anti-commons has not materialized, we focus here on Hardin’s commons, exploring how publicly available resources are taken up by the public.

We present evidence that creating publicly available scientific resources can foster further scientific innovation, by documenting the uptake of openly available resources. Specifically, we track the access (requests to use the resource) and use (published references to the use of the resource) of the C. elegans Gene Knockout Consortium knockouts. Using these knockouts as a case study of how one particular publicly accessible resource is taken up by the community of potential scientific users (C. elegans researchers), we provide an example of how an accessible resource can foster further scientific work and innovation.

The C. elegans Gene Knockout Consortium, founded in 2001, was created to centralize an effort to mutate every gene in C. elegans, a roundworm (Barstead and Moerman, 2008). With a complete C. elegans sequence in hand (also publicly accessible), the idea of mutating every gene in the roundworm emerged (Barstead and Moerman, 2008). The results of the GKC effort are centrally housed at the Caenorhabditis Genetics Center (CGC or “stock center”), a publicly funded (NIH) repository for nematode strains in St. Paul, Minnesota. All strains in the repository are frozen and stored in a single location, but are available to any researcher upon request (for a nominal fee). In the case of the C. elegans Gene Knockout Consortium, the resources that are made publicly available become part of the scientific or intellectual commons (Drahos, 2006; Frequently Asked Questions: What Is Creative Commons?). Tracking the requests for these strains, and the academic publications that reference them, thus serves as evidence that this resource is taken up and used, acting as a common resource.

 

ASSASSINATION SCIENCE: CRITICAL THINKING IN POLITICAL CONTEXTS

James H. Fetzer, Ph.D.

University of Minnesota Duluth

As a professional philosopher of science who has devoted his career, in large measure, to elaborating and defending an abductivist model of science based upon inference to the best explanation, it was perhaps inevitable that my interest in political assassinations, such as that of President John F. Kennedy, would invite the application of the principles which define that model to cases of this kind. In collaboration with the most highly qualified experts ever to investigate the death of JFK--a world authority on the human brain, who was also an expert on wound ballistics; a Ph.D. in physics who is also an M.D. and board-certified in radiation oncology; a physician who was present when the President was pronounced dead at Parkland Hospital in Dallas and who was responsible for the treatment of his alleged assassin; a legendary photo-analyst who specializes in assassination photos and films; and another Ph.D. in physics with a specialization in electromagnetism--we have focused our attention during our investigation on separating the authentic evidence from the inauthentic.

According to the abductivist conception, science is a process that involves four stages, namely: puzzlement, speculation, adaptation, and explanation. Adaptation involves comparisons between possible explanations using likelihood measures of evidential support, which are based upon calculations of the probability of the evidence, if the hypotheses were true. The law of likelihood affirms that hypotheses with higher likelihoods are preferable to those with lower, where the hypothesis that has the highest likelihood becomes acceptable when the evidence "settles down". In this case, the lone-assassin hypothesis is compared with its conspiracy alternative, where the "conspiracy hypothesis" posits that more than one individual was involved in the actual shooting or in covering up the true causes of the death of JFK. Some of our most important discoveries have revolved around the autopsy X-rays, the brain shown in diagrams and photographs in the National Archives (the original is missing), and the home movie of the assassination known as "the Zapruder film".
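For reference, the likelihood comparison invoked here can be written compactly; this is a standard formulation of the law of likelihood, not quoted from the abstract:

    % Law of likelihood, as standardly stated for rival hypotheses H1, H2 and
    % evidence E: E favours H1 over H2 exactly when E is more probable on H1 than
    % on H2; the likelihood ratio measures the strength of that support.
    \[
      E \text{ favours } H_1 \text{ over } H_2
      \iff
      P(E \mid H_1) > P(E \mid H_2),
      \qquad
      \Lambda = \frac{P(E \mid H_1)}{P(E \mid H_2)} .
    \]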

We discovered that the autopsy X-rays had been altered; that the images of the brain are inconsistent with multiple reports by the physicians at Parkland; and that the Zapruder film excludes a wide variety of events that were reported by witnesses to have occurred in Dealey Plaza during the shooting. This presentation discusses the application of inference to the best explanation to these and other data points in our efforts to take rumor and speculation out of the case and place its study upon an objective and scientific foundation. Once the authentic evidence has been differentiated from the inauthentic, discriminating between these two alternative hypotheses turns out to be straightforward, from a logical point of view, where one of them can be established beyond a reasonable doubt, since no alternative explanation appears to be reasonable.

 

CRITICAL CONDITION: CAN FEMINIST ACCOUNTS OF EVIDENCE REHABILITATE EVIDENCE-BASED MEDICINE?

Maya Goldenberg,

University of Guelph

As the name “evidence-based medicine” should suggest, the notion of evidence is central to current conceptions of good biomedical research and practice. This invites reflection from philosophy of science and medicine on the nature of evidence and its role in the justification of medical knowledge. Numerous critiques of the evidence-based movement hinge on what is seen to be an inappropriately objectivist underlying conception of evidence. The evidence-based programme has been faulted for supporting a questionable and seemingly innocuous technique of deferral to the evidence that obscures the multiple and complex considerations that unavoidably go into healthcare decision-making. The critics do not want to abandon evidence, but desire a more “honest” process for incorporating clinical research into patient care that recognizes the social dimensions of medicine and the multiple facets of knowledge that inform the discipline. The goal of rethinking evidence is to provide a more nuanced account that will correct the shortcomings of the evidence-based approach to medicine.

Post-positivist epistemologies of science productively inform these debates by offering naturalized, pragmatic, holistic, and feminist accounts of evidence. Feminist research is particularly interesting and invested in the question of evidence, as the construction of evidence has been tied to certain feminist causes and concerns in science studies. In this presentation, I want to examine alternative feminist accounts of evidence offered by Lynn Hankinson Nelson (1990; 1993), Helen Longino (1990), and Sharyn Clough (2003a; 2003b), and consider which frameworks are best able to respond to the specific question of evidence that has arisen in the evidence-based medicine debates.

I will argue that the epistemic holism in both Longino’s contextualized account of evidence and Nelson’s naturalized theory appropriately broadens the category of what warrants evidentiary consideration, to include, say, certain values and political commitments. Yet this broadening diminishes the adjudicative force of evidence compared to what is offered by correspondence relationships between objects and their observers. While this might be an appropriate epistemic limit, it does not provide health care practitioners with a workable framework for making pressing clinical decisions.

Clough offers an explanation of, and remedy for, the problem that I have just described. She reads previous feminist writings (Longino specifically, and I extend her critique to Nelson) as relativizing evidence. Influenced by Donald Davidson and Richard Rorty, she maintains that regardless of whether one posits a priori or concludes a posteriori (through naturalized inquiry) the existence of an independent external world, a separation between “the objective” and “the social” (or “content” and “scheme”) is created. Borrowing Davidson’s critique of “representationalism”, she charges coherentist theories of evidence with maintaining the same problematic metaphysical dualism as the positivists. Her solution, following Davidson, is an anti-representationalist schema in which, she argues, there is no filter separating our representations and the world, yet which still manages to incorporate the anti-objectivism of critical science studies. Recognizing that Clough is offering precisely what the EBM critics need in an alternative account of evidence, I evaluate the success of Clough’s theory of evidence in redeeming the adjudicative force of evidence while still incorporating the lessons learned from postpositivist science studies, by bringing this framework to bear on the question of evidence in evidence-based medicine.

 

HARMONIZING MODELS AND PHENOMENA: THE CASE OF AFLATOXIN

Marta Halina

University of California

In the spring of 1960, a mysterious disease infected thousands of turkeys in England. This outbreak spurred a flurry of biological and medical investigations aimed at determining the causal agent behind it. The agent was soon identified as “aflatoxin,” a compound produced by the fungus Aspergillus flavus, and found in the feed of the affected turkey population. Researchers studying aflatoxin quickly realized that it was present in various food sources, including those of humans. Determining the conditions under which aflatoxin was harmful to organisms became a pressing problem in biological research at this time.

Establishing the effects that aflatoxin had on organisms was not a straightforward task. While some animals quickly developed symptoms upon being fed aflatoxin, other animals appeared unaffected. By 1968, at least 20 different species had been tested for their susceptibility to aflatoxin, and while animals such as the guinea pig, duckling, and rat were susceptible, others like the mouse, sheep, and chicken were not. These inter-species differences guided researchers in their attempt to build a model of aflatoxin toxicity. That is, the main criterion for determining a model’s success was accounting for these differences in susceptibility.

Aflatoxin research from the late 1960s to the early 1980s illustrates an aspect of scientific practice that has received little attention in the philosophical literature. In particular, it shows how the application of inaccurate models can contribute to the process of discovery and model building. Researchers studying aflatoxin toxicity proposed several models that were unable to account for the inter-species differences in aflatoxin susceptibility. However, rather than reject these models, investigators modified them piecemeal in ways that made them successful.

Not only can the parts of a model be modified, but the assumptions behind that model can be reevaluated as well. When the toxic effects of aflatoxin were first recognized in the 1960s, researchers worked to develop a model that would apply to all organisms under investigation. However, by the early 1980s, only a subset of these organisms was considered relevant. Researchers circumscribed the model’s intended domain of application in order to maintain its success.

The details of aflatoxin research reveal a harmonization process in which researchers simultaneously assemble models piecemeal while determining which objects those models should describe. Understanding the construction and application of models in biological domains will require more attention to this process of co-development.

 

SCIENCE, SOCIETY, AND REINFORCED INTOLERANCE OF MENTAL ILLNESS

Susan C. Hawthorne

Mt. Holyoke College

Good intentions can go awry. A generation or so ago, children who had high activity levels, impulsivity, and lack of attention were understood as “naughty”; adults with similar symptoms were “failures.” Current psychiatric and neuroscientific concepts interpret such behaviors, when impairing, as symptoms of the mental disorder attention-deficit/hyperactivity disorder (ADHD). This revised view, and the scientific and clinical enterprise that has worked to investigate and “manage” ADHD, has in many ways succeeded in reshaping the understanding of ADHD-associated behaviors from moral failing to medical/biological phenomenon. Under the new concept, responsibility and blame for the behaviors deemed undesirable are largely attributed to biology, and management emphasizes treatment or accommodation rather than punishment. This transformation in attitude is often described as successfully reducing the stigma directed toward ADHD-diagnosable individuals. I argue, however, that today’s stigma is different, but not reduced. Rather, intolerance of ADHD-associated behaviors has been reinforced via the interplay of scientific, medical, social, and individual goals. Although this paper details only the example of ADHD, a similar case can be made for other relatively mild mental illnesses, such as the various “shadow syndromes,” dysthymia, and nonsevere depression.

To make the case in the example of ADHD, I need to explain what I mean by intolerance, and I need to show that recent and current practices reinforce it. I begin with the latter issue, with a goal of showing, specific to the case of ADHD, the strong mutual influences of society on ADHD science (the task of Part 1), and of ADHD science on society (Part 2). The analysis of the mutual influences builds on the work of Anderson, Elliott, Hacking, Longino, Putnam, Rouse, Sadler, and others who have considered the influence of values on science (and science on values), and the important effects of scientifically-changed norms and possibilities on individuals and society (and vice-versa) (Anderson 2004; Elliott 2003; Hacking 1995; Longino 1990; Putnam 2002; Rouse 1987; Sadler 2002). I argue that the mutual society/science influences have created and enforced norms according to which we judge each other, and according to which we guide social and science policy. In Part 3, I illustrate ways in which these norms affect individuals and their decisions, and draw attention to the pressures exerted on ADHD-diagnosable individuals or proxies who disagree with the current understanding of ADHD. The norms retain the disvaluation of ADHD-associated behavior that characterized earlier concepts. In addition, the broad influence of the current ADHD concept and the practices surrounding it leaves few options for ADHD-diagnosable individuals outside mainstream recommendations for diagnosis and treatment. This combination of disvaluation of ADHD-associated behaviors and limited options for ADHD-diagnosable individuals constitutes a form of intolerance. I conclude by suggesting that the patterns of mutual influence displayed in the ADHD case affect understanding of other psychiatric conditions, and the people who “have” them, as well.

 

STYLES OF EXPERIMENTING AS AN ANALYTICAL CATEGORY FOR SCIENTIFIC PRACTICES

Peter Heering

Carl-von-Ossietzky Universitaet Oldenburg

During the last two decades, our group has applied the so-called replication method as a central approach to developing an understanding of historical experimental practice. In reconstructing historical instruments and redoing historical experiments, we have developed a specific approach to analysing experimental practice. From this analysis, different ways of describing this practice have been developed.

In the presentation, I am going to discuss the concept of a style of experimentation, which serves as an attempt to characterize experimental practice. This concept can be seen as an expansion of Fleck’s conceptions of ‘style of thought’ and ‘thought collective’. Fleck’s approach has recently regained attention; yet experimental practice is not a major aspect of his analysis of the production of scientific knowledge. This deficit can be explained when the standards of Fleck’s time are taken into consideration; however, in order to meet the current understanding of knowledge, an expansion of Fleck’s conception appears to be necessary.

In describing this concept, I will discuss two case studies from the late 18th century. On the one hand, I am going to discuss the experiments carried out by Jean Paul Marat, a physician who was to become a radical journalist during the French Revolution. During the 1780s, Marat attempted to establish himself as a natural philosopher but failed to do so. As a result, his experiments appear different and unfamiliar nowadays. These experiments become more familiar, however, when one attempts to redo them.

Things are different with respect to the second example I am going to use. Coulomb’s experiments were carried out in the same period as Marat’s, yet they became canonical. Consequently, their accounts appear familiar even nowadays. Yet analysing them with the replication method helps to make them more unfamiliar and thus helps to develop a more thorough analysis.

As a result of this analysis, it becomes possible to compare two different styles of experimentation and thus to show the analytical potential of this conception.

 

REALISM AND THE BULLET CLUSTER

Robert Hudson

University of Saskatchewan

In 2006, the astrophysicists Douglas Clowe and his colleagues made the startling claim to have discovered by inspection of the cosmological phenomenon, the ‘Bullet Cluster’, direct empirical proof of the existence of dark matter, the mysterious hypothetical substance thought to make up about 25% of the total constitution of the universe. My initial task in this paper is to explore what these astrophysicists mean in saying that a theoretical construct, dark matter, is ‘directly observed’ (a process that for them uses both X-ray observations and gravitational lensing). As such, I formulate the following analytical definition of ‘direct evidence’: given two evidential claims (claims expressing evidence for a proposition), the first evidential claim is ‘direct’ relative to the second evidential claim if the set of background information that underpins the first claim is a subset of the set of background information that underpins the second claim. Accordingly, since the first claim is less vulnerable to refutation than the latter claim (as it is less dependent on background assumptions), we can say that direct evidence is stronger evidence for a hypothesis than indirect (that is, less direct) evidence. It is in this sense that Clowe et al say that they have direct evidence for the existence of dark matter: they have evidence that is ‘direct’ as compared to the other main line of support for dark matter that involves inferring the existence of dark matter based on observations of galactic rotation curves. The main opponents to the existence of dark matter, Moti Milgrom and John Moffat, have argued that they can equally well explain galactic rotation curves without referring to dark matter. With direct evidence for dark matter, Clowe et al believe they can out-maneuver these anti-dark-matter arguments.
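The definition just given can be restated compactly; the notation B(E) for the set of background information underpinning an evidential claim E is mine, introduced only for illustration:

    % Directness of evidence, restated: evidential claim E1 is direct relative to
    % E2 just in case E1 rests on no more background information than E2 does.
    \[
      E_1 \text{ is direct relative to } E_2
      \iff
      B(E_1) \subseteq B(E_2).
    \]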

After analyzing Clowe et al’s claim to have direct evidence for dark matter, I examine what implications their approach has for the assessment of scientific realism. As is well known, one of the main pillars of support for scientific realism is the miracle argument which grounds a realistic interpretation of scientific beliefs on the empirical success of these beliefs. Yet such argumentation has proved unsatisfying to a number of astrophysical researchers since it leaves room for contrary views, such as Milgrom’s and Moffat’s, that are opposed to the existence of dark matter. In this context, Clowe et al claim that they have direct evidence for the reality of dark matter that can avert the weaknesses of inferential arguments, and motivated by their claim I respond to three powerful criticisms that have been posed against scientific realism, criticisms based on three problems: 1) the problem of theoretical underdetermination, 2) the ‘pessimistic meta-induction’, and 3) the ‘problem of unconceived alternatives’. My strategy is to examine each of these problems, explaining first why they are thought to pose a threat to the miracle argument and thus to scientific realism, and to then show how each can be effectively responded to on behalf of scientific realism using the strategy of direct evidence as outlined here.

 

EVIDENCE FOR USE: THE CASE OF THE HPV VACCINE

Kristen Intemann

Inmaculada de Melo-Martín

There is growing consensus that the aims of science can depend on social aims (Kitcher 2001; Solomon 2001; Longino 2002; Kourany 2003). Kitcher has argued that the main goal of science is not merely to discover truths about the world, but particularly significant truths, which are determined by human values and interests (Kitcher 2001, 44). Yet some assume that this only has implications for which research programs to pursue and not for scientific methodology or standards of evidence. Kitcher (2001), for example, focuses only on how to democratize decisions about research priorities. Similarly, Solomon (2001) is concerned with promoting a fairer distribution of research effort among empirically adequate research programs driven by different values and interests. This assumption reinforces the position that values can play a legitimate role in the context of discovery, but not in the context of justification.

We argue that when the aims of research depend on social values, such values not only have implications for research priorities but also help justify methodological decisions and standards of evidence. Using the case of the recently approved human papillomavirus (HPV) vaccines, we show that social, as well as epistemic, aims of the research play an important role in methodological decision-making during basic research and in the design and execution of clinical trials. In particular, social aims of research are relevant to justifying decisions about 1) how research problems are defined (including parameters for solutions) in drug development, 2) evidentiary standards used in testing drug “success”, and 3) clinical trial methodology.

If, for example, one takes the goal of the HPV research to be a reduction in cervical cancer morbidity and mortality among vulnerable populations, the recently approved vaccines will do little to further that aim. While clinical trials show the vaccines are efficacious in preventing persistent HPV infection, and ultimately cervical cancer, in highly controlled conditions, there is little evidence that the vaccines will be effective in significantly reducing the incidence of cervical cancer among the population most likely to develop it. This is because 83% of cervical cancers are found in developing countries, where the vaccines are unlikely to work. The cost of the vaccines, the fact that they need to be administered in three shots over six months, their need for refrigeration, and the fact that they protect only against high-risk HPV types that are more prevalent in the U.S. and Europe render the vaccines less effective for the populations most at risk for cervical cancer. Thus, how the social aims of the research are conceived (whether the concern is to decrease the morbidity and mortality caused by cervical cancer in industrialized nations or to reduce the worldwide incidence of cervical cancer) has implications for the rationality of methodological decisions and standards of evidence in drug development and testing. We conclude that evaluating and endorsing particular social aims, as well as reasoning about how the social aims of research can best be promoted, is crucial to producing more socially responsive and useful scientific research.

 

DATA PROCESSING IN OBSERVATION

Vincent Israel-Jost

IHPST, University of Paris

Although the observational status of data produced by instruments has been widely discussed among philosophers of science, those who defend it (e.g. Shapere (1982), Hacking (1983), Humphreys (2004) and many others) still do not completely account for contemporary practices of observation. Indeed, these data are very often computationally processed before they are examined by scientists and tend to be more and more so, as most detectors now produce data in a digital form. Hence, the raw data (the data detected and not yet modified) are stored as matrices or vectors that can easily be mathematically processed.

In addition, while computational data processing shares important features with simulation, as both practices are based on solving equations associated with models, philosophical analyses of simulations (e.g. Humphreys (1994 and 2004), Hartmann (1996)) cannot account for data processing in observation, for in these studies of simulation the model aims to describe the very phenomenon that is at the center of scientific investigation. In data processing for observation, by contrast, scientists make use of two types of models, both of which are neutral with regard to the studied object or phenomenon. The first type of model concerns the different steps of data acquisition and allows the scientist to predict the data corresponding to a given phenomenon. Used the other way around, in an inverse problem, this type of treatment allows one to reconstruct the original phenomenon from the data, with greater purity or in a spatial representation that the observer can grasp more easily. Hence, one can "deblur" (or deconvolve) images that are blurry because the detector is not accurate enough (e.g. in microscopy), or give a 3D representation of a phenomenon for which only 2D images could originally be produced (e.g. in CAT-scan imaging). The second type of model, which deals more specifically with images, aims to describe some mechanism of vision, such as the demarcation (or segmentation) of objects or the simplification of images, for example by rendering homogeneous regions that are not so but that we would tend to see as such. This facilitates the reading of images and yields better agreement between what two different observers see.
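
As a concrete illustration of the first type of treatment, the following minimal sketch (an illustrative example, not drawn from the paper, assuming a Gaussian point-spread function as the acquisition model) inverts the model of data acquisition to recover a sharper estimate of a point source from blurred raw data. Note that only the model of the detector enters the treatment; nothing is assumed about the observed object itself.

import numpy as np

def gaussian_psf(shape, sigma):
    # centered Gaussian point-spread function, normalized to unit sum
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def blur(image, psf):
    # forward model of acquisition: convolution, computed in Fourier space
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(np.fft.ifftshift(psf))))

def deblur(data, psf, eps=1e-3):
    # inverse problem: regularized (Tikhonov-style) inverse filter for the same forward model
    H = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(data) / (np.abs(H) ** 2 + eps)))

truth = np.zeros((64, 64))
truth[32, 32] = 1.0                         # a point-like object
psf = gaussian_psf(truth.shape, sigma=1.5)  # assumed detector blur
raw = blur(truth, psf)                      # simulated raw data
restored = deblur(raw, psf)                 # computationally processed data
print(raw.max(), restored.max())            # the deblurred peak is several times higher than the raw one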

While the inferential nature of the treatments applied to data is not in doubt (see Delehanty (2005) on positron emission tomography (PET) images), the fundamental distinction in the context of observation is not between the inferential and the non-inferential, but rather between inferences that concern the very object of the scientific inquiry and those that concern the data-acquisition and perception processes, since only the latter two types of inference are compatible with observation. More specifically, I shall argue that computer treatments which involve models of data acquisition raise no additional difficulty regarding the observational status of data, compared to the raw data produced by the same instrument, since the treatments only make explicit use of knowledge of processes to which the observer already adheres (explicitly or implicitly). By implementing this knowledge in a systematic and reliable way, they also help to reduce the gap between (raw) data and phenomena. The role in observation of treatments that make use of models of perception, however, is much harder to defend, since the resulting images often lack many of the original features of the raw data.

 

EXPERIMENTS IN THE SOCIAL SCIENCES: THE RELATIONSHIP BETWEEN EXTERNAL AND INTERNAL VALIDITY

María Jiménez-Buedo

UNED

Luis M. Miller

Oxford University

In the last two decades, the debates around the worth of the experimental method in economics, and more generally in the social sciences, have been many, heated, and salient for practitioners and methodologists alike. This is a consequence of the consolidation of the experimental method as a valid tool for economic research, which, as a side effect, has reopened the discussion about the benefits and drawbacks of laboratory experiments in the social sciences. Despite the proliferation of debates, authors have hardly reached agreement on one of the main concerns of experimental methodology: the problem of external validity and, in particular, its relation to internal validity.

In relation to the validity problem, experimental economists, as well as the majority of experimental social psychologists and sociologists, have followed the seminal works of Donald Campbell and his collaborators (Campbell and Stanley, 1963; Cook and Campbell, 1979). Quoting their classical definitions (Cook and Campbell, 1979: 37), internal validity ‘refers to the approximate validity with which we infer that a relationship between two variables is causal or that the absence of a relationship implies the absence of cause’, and external validity ‘refers to the approximate validity with which we can infer that the presumed causal relationship can be generalized to and across alternate measures of the cause and effect and across different types of persons, settings, and times’. Starting from these definitions, it is commonly assumed that there is a tension between the two sources of validity. For example, Cook and Campbell (1979: 82), in their discussion of the relation between internal and external validity, put it in the following way: ‘Some ways of increasing one kind of validity will probably decrease another kind’. We can find references to this tension in both social psychology (Brehm et al., 1999; Smith and Mackie, 1999) and economics texts (Guala, 2005). At the same time, this received view coexists with positions claiming that the problem of internal validity is chronologically and epistemically antecedent to problems of external validity, or even that the question of external validity is irrelevant to many types of experiments, in particular those that are theory-oriented (Guala, 2003; Thye, 2000; Kanazawa, 1999).

We first underline the existence and contours of this important yet mostly implicit debate between those who portray the relationship between external and internal validity as a trade-off and those who think that the internal validity of an experiment is a prerequisite for its external validity. In doing this, we aim to problematize the distinction between internal and external validity as it is currently drawn both in the philosophical literature on experiments and in experimenters’ discussions of their own work. Drawing on several well-known public goods experiments as case studies, we base our argument on a classification of some of the definitions of external and internal validity available in the literature and of their operationalisations in different experimental designs. Our analysis suggests that there are no grounds for positing a general relationship between internal and external validity, and that this relationship ultimately depends, logically, on the definitions of both types of validity favored by the methodologist and, empirically, on the goals of the experimenter.

 

IN SEARCH OF TOOLS TO BRIDGE THE GAP BETWEEN SCIENCE AND POLICY MAKING: ON THE NOTION OF RESEARCH PROGRAMMES IN SUSTAINABLE DEVELOPMENT DEBATES

Marc Kirsch

Collège de France, Paris

This paper deals with the way in which concepts borrowed from the philosophy of science can be used to describe scientific landscapes in cases where science is needed for decision-making and for designing policy, in contexts of complexity and uncertainty, and in the presence of competing or complementary models or theories.

Our case study focuses on the gap between science and public decision-making in the field of sustainable development, namely in the design of development policies that combine agricultural development with biodiversity conservation. This gap may result from different causes. Some are external: for instance, organizational shortcomings in scientific institutions, weak involvement of non-academic actors in research processes, etc. The ways of bridging this gap that are usually suggested are predominantly sociologically oriented (e.g. post-normal science or “Mode 2” knowledge production as described by Gibbons et al., 1994; Nowotny et al., 2001). But the gap may also have internal causes, especially the proliferation of scientific production, which makes it very difficult for an individual researcher or policy-maker to master, or even to gain a rough overview of, the knowledge available in a scientific field or bearing on any issue involving a certain degree of uncertainty and complexity.

In the medical field, the attempt to build meta-knowledge in order to cope with this blooming production of knowledge, and to keep trainees up to date with the latest and best knowledge and treatments available for clinical practice, has led to the development of an evidence-based approach which has spread to other social domains, such as justice, education, and public decision-making in general. Thus, whilst analyzing the interest of evidence-based approaches for designing agro-environmental policies, researchers (Laurent & Baudry, Allsopp) also try to classify existing relevant knowledge and to compare the conceptual architectures of coexisting theories. They use Lakatos’ concept of “research programmes” as an instrument to describe and characterize scientific theories, arguing that this concept is appropriate for describing a diversity of theories that coexist at a given moment in time. It proposes a pattern for describing theories which is applicable to all approaches. Furthermore, compared to Thomas Kuhn’s concept of paradigm, it provides a description which is less closely tied to the social aspects of scientific activity and to the dominance status of a theory in a particular scientific field.

We shall examine this approach and the way it is applied to ecology and economics in order to provide a description useful both for research and practice.

 

ARE PROOFS MATHEMATICAL EXPERIMENTS? ARE MATHEMATICAL EXPERIMENTS PROOFS?

Henrik Kragh Sørensen

University of Aarhus, Denmark

From a philosophical viewpoint, mathematics has often and traditionally been distinguished from the natural sciences by its formal nature and its emphasis on deductive reasoning. Experiments – one of the cornerstones of most modern natural science – have had no role to play in mathematics.

However, during the last three decades, high-speed computers and sophisticated software packages such as Maple and Mathematica have entered the domain of pure mathematics, bringing with them a new experimental flavour. They have opened up an approach in which computer-based tools are used to experiment with mathematical objects in a dialogue with more traditional methods of formal, rigorous proof. At present, a sub-discipline of experimental mathematics is forming, with its own research problems, methodology, conferences, and journals.

Elsewhere, I have argued that the epistemological claims for experimental mathematics could profitably be updated by discussing recent ideas connected to exploratory experimentation in the sciences. In this paper, I wish to continue discussing the relations between experiments in mathematics and in the sciences.

In 2008, a number of papers were published undertaking the philosophical clarification of the meaning of experimental mathematics. For instance, Van Kerkhove and Van Bendegem sought to argue for an “irreducible role in mathematics for genuine induction” (in Erkenntnis vol. 68(3), p. 424). Alan Baker argued that “a literal reading of ‘experiment’ in the context of clarifying the nature of experimental mathematics, is unfruitful” (in Erkenntnis vol. 68(3), p. 339). Arguing against other views, Baker suggested that the central feature of experimental mathematics is the calculation of instances of general hypotheses.
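
To make Baker’s point concrete, here is a generic instance-checking computation (not an example taken from the papers cited): a general hypothesis, Goldbach’s conjecture, is confirmed case by case, which lends inductive support without amounting to a proof.

def is_prime(n):
    # trial division; adequate for the small instances checked here
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def goldbach_witness(even_n):
    # return a pair of primes summing to even_n, or None if no such pair exists
    for p in range(2, even_n // 2 + 1):
        if is_prime(p) and is_prime(even_n - p):
            return (p, even_n - p)
    return None

failures = [n for n in range(4, 10000, 2) if goldbach_witness(n) is None]
print("counterexamples below 10000:", failures)   # prints an empty list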

Much of the philosophical debate has been concerned with arguing for a special role for “genuine” induction in mathematics. Less attention has been paid to comparing the processes of experimental mathematics with those of conducting thought experiments in the search for (ordinary) mathematical proofs. This will be my main concern in this paper.

Thus, in the first part of this paper, I briefly outline the impact of high-speed computing on experimental mathematical research. I then consider some of the epistemological claims put forward within experimental mathematics. In particular, I investigate positions vis-à-vis the need for formalised proofs of experimentally obtained results, on which two of the proponents of experimental mathematics, Jon Borwein and Doron Zeilberger, fundamentally disagree.

In the second part of the paper, I draw upon discussions of the relations between proofs and (thought) experiments going back to Wittgenstein’s Lectures on the Foundations of Mathematics and Lakatos’ ideas about proofs as thought experiments. After outlining central features from Wittgenstein and Lakatos, I analyse the parallels between experimental mathematics and ordinary proof seeking in mathematics. Thereby, I investigate a fruitful approach to philosophically studying experimental mathematics and analysing the claims of experimental mathematicians to knowledge production.

 

MATHEMATICAL REALISM: A VIEW FROM INDUSTRY

Chris Mack

The University of Texas

While mathematical realism versus intuitionism, constructivism, etc., is a lively topic of research and debate among philosophers, it is rarely discussed or even carefully considered by practicing scientists. Still, most scientists seem to bring a mostly non-realist perspective to their work: circles and triangles are convenient mathematical fictions that are useful because they approximate real shapes found (or created) in the empirical world. The extreme usefulness of mathematics as the language of science, however, often leads to a certain level of realism in the use of mathematics in scientific practice, exhibited by the almost unquestioned belief that mathematical concepts can always be used to describe empirical observations and their theoretical interpretations [1]. It is not at all clear, however, that such faith is justified.

This paper will discuss the author’s experience of a specific instance in which mathematical realism was confronted by the practice of science and engineering, in the use of metrology for semiconductor manufacturing. While developing software algorithms to analyze scanning electron micrograph (SEM) images of sub-micron semiconductor patterns, attempts to measure the circumference of a pattern with a desired precision met with utter failure. On investigation, it became clear that the mathematical concepts of circumference and surface area are in fact unmeasurable on any physical object without the use of ad hoc rules. Interestingly, the measurement of volume does not suffer from this limitation. Based on this experience, a definition of “measurably real” can be applied to certain mathematical concepts:

A mathematical entity applied to an empirical entity is “measurably real” if, in the limit of smaller and smaller measurement resolution, the quantity converges to a finite number with a finite error estimate.

It will be shown that circumference and surface area do not meet this definition of “measurably real”, whereas volume does.
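
A two-dimensional analogue of the claim can be checked with a short calculation (an illustration only, not the author’s SEM case). For a Koch-like boundary, the measured boundary length, the analogue of circumference or surface area, grows without bound as the measurement resolution is refined, while the enclosed area, the analogue of volume, converges to a finite value.

side = 1.0
perimeter = 3 * side                          # start from an equilateral triangle
area = (3 ** 0.5 / 4) * side ** 2
for level in range(12):
    # each refinement replaces every edge with four edges, each one third as long,
    # and adds one small outward triangle per original edge
    n_edges = 3 * 4 ** level
    new_side = side / 3 ** (level + 1)
    area += n_edges * (3 ** 0.5 / 4) * new_side ** 2
    perimeter *= 4 / 3
    print(f"resolution level {level:2d}: perimeter = {perimeter:8.3f}, area = {area:.6f}")
# the perimeter diverges, while the area converges to 8/5 of the initial triangle's area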

The relationship between these concepts and the well-known ideas of fractal geometry will also be discussed. Finally, the real-world impact of these ideas on the practice of semiconductor metrology will be presented, as they have influenced the now-accepted standard definitions of metrology terms used by that industry [2].

1. Wigner, E. P.: 1960, ‘The Unreasonable Effectiveness of Mathematics in the Natural Sciences’, Communications on Pure and Applied Mathematics 13, 1–14.

2. SEMI Standard P35-1106, “Terminology For Microlithography Metrology”, published in 2006.

 

A PHILOSOPHY OF EVIDENCE RELEVANT FOR REGULATION

Deborah Mayo

Virginia Polytechnic Institute and State University

The aim of my paper will be to address the question: What is needed for a philosophy of evidence that is relevant to regulatory practice and to disputes about evidence-based policy?

To begin with, we need accounts of evidence that are adequate for critically appraising methodological entanglements (of science and policy). It will not do to stop with vague "logics" of confirmation or probabilistic inference; philosophers (be they formal epistemologists or other) should confront questions of how to actually apply their methods and how these applications relate to issues of evidence in practice.

Illustrating with examples of medical and environmental risks, I show how some of the thorniest problems of methodology intermingle risk policy with basic issues of the collection and interpretation of statistical evidence. The lack of a critical understanding of these issues enables opposing sides of a dispute to criticize, on "scientific" grounds, the statistical inferences on which risk assessments (and policy) are based. Advocates of rival positions, even with shared evidence, are able to accuse each other of being guilty of "junk science". An adequate philosophical scrutiny, if it is to be more than a reflection of our policy preferences, needs to address these issues. It does not suffice to consider a case study, even armed with expertise in statistics or modeling, since the central problems often turn on foundational debates, or on inadequate understanding, misuses, or misinterpretations of these very tools. Without the basis for a "meta-level" critique of these issues, there is a danger of simply signing on to one of the rival positions that exist in practice, e.g., always (never) use this type of experimental design (e.g., randomized-control trials), algorithm, extrapolation model, data mining technique, interpretation of statistics, etc. This would seem to forfeit a central role for philosophy of science: to scrutinize the conceptual, logical, and evidential discomforts of others. My recommendation is not that philosophers of science become "experts" (e.g., in statistics, toxicology, epidemiology, or law); this would be too much, but also too little: we are often dealing with disagreements among experts. Nevertheless, a sufficient understanding of the methods, together with a platform for raising questions about fallacies and pitfalls, building on interdisciplinary work by philosophers and practitioners, could promote a philosophical/methodological account with real bite. In the risk assessment arena, the kind of critical "metascientific" analysis I have in mind might revolve around questions of "risk assessment policy" (RAP) judgments: how do various methodological choices made in the generation, modeling and interpretation of data alter the ability of the analysis to detect risks (or benefits) of various types?
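
As a toy illustration of the kind of RAP question just raised (a generic sketch, not an example from the paper), the following simulation shows how two ordinary methodological choices, sample size and significance threshold, change an analysis's ability to detect a genuine but modest elevation of risk.

import numpy as np

rng = np.random.default_rng(0)
Z_CRIT = {0.05: 1.645, 0.01: 2.326}      # one-sided normal critical values

def detection_rate(n, alpha, base_rate=0.05, risk_ratio=1.5, trials=2000):
    # fraction of simulated studies that flag the elevated risk as significant,
    # using a normal-approximation test for a difference in proportions
    detections = 0
    for _ in range(trials):
        cases_ctrl = rng.binomial(n, base_rate)
        cases_exp = rng.binomial(n, base_rate * risk_ratio)
        p1, p2 = cases_ctrl / n, cases_exp / n
        pooled = (cases_ctrl + cases_exp) / (2 * n)
        se = (2 * pooled * (1 - pooled) / n) ** 0.5
        if se > 0 and (p2 - p1) / se > Z_CRIT[alpha]:
            detections += 1
    return detections / trials

for n in (200, 1000, 5000):
    for alpha in (0.05, 0.01):
        print(f"n = {n:5d}, alpha = {alpha}: detection rate ~ {detection_rate(n, alpha):.2f}")
# the same underlying risk is routinely missed or found depending on these design choices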

Of special interest are the regulations on which stakeholders base critical appeals of evidence regarding risks/benefits, e.g., the Data Quality Act (DQA). While these critical appeals may charge that there are errors that render given inferences flawed (or even "junk science"), our (philosophical/methodological) job might be to "criticize the critic". I will consider some implications for education: both in philosophy of science and in research methods courses.

 

A PHILOSOPHICAL FRAMEWORK FOR PATIENT-REPORTED OUTCOME MEASURES

Dr. Leah McClimans

University of South Carolina

Patient-reported outcomes are meant to provide information about the way patients collectively understand their health and quality of life in the face of illness or medical intervention. As such, they are meant to provide the patients’ perspective on clinical outcomes (NHS, 2008). Nonetheless, it is widely held within the literature on Patient-Reported Outcome Measures (PROMs) that they lack a robust theoretical account and that this lack affects their ability to justify such claims (Hunt, 1997; Hobart, 2008; Presidential Address ISOQoL, 2008). In this paper I begin to flesh out a theoretical framework for PROMs drawing on work in philosophy of science and social science that concerns interpretation and the logic of asking questions.

Following the work of philosophers such as Thomas Kuhn and Hans-Georg Gadamer, I first argue that the logic of asking and understanding questions has the structure of a circle: we ask questions about a subject matter that we do not fully understand, and as a result the questions we ask are open to a certain amount of reinterpretation. We come to a better understanding of the meaning of our questions and answers as we come to understand the subject matter. At least two consequences follow from this characterization of questioning. Firstly, questions cannot be standardized; to do so is to assume that we know more about the subject matter than the act of questioning suggests. Secondly, we can always learn more about a particular subject matter: new questions have the capacity to open up new perspectives on a subject and to transform our understanding of it.

I then argue that PROMs, which consist of questions along with a selection of possible answers, ought to be conceived in terms of this same circular structure. To support my argument I examine a series of empirical studies. I begin with a study of patient perceptions of cataract surgery to illustrate that the constructs that PROMs assess are vague before patients answer questions about them. I then turn to qualitative studies that look at how patients understand the individual questions posed in these measures. Although researchers assume that these questions can be standardized, numerous studies suggest that patients regularly understand them in a variety of ways. I argue that there is a connection between the vagueness of the research construct and the variety of ways in which patients understand these questions. I also argue that the different understandings that patients bring to these questions are often fruitful; they help us to better understand the constructs under investigation.

This account of PROMs is at odds with the conception of science to which most epidemiologists and health service researchers adhere. Nonetheless, it resolves certain methodological problems that continue to worry researchers, such as validity and response shift; moreover, it makes clearer just how PROMs might authentically provide the patients’ perspective on clinical outcomes. But it also creates new problems regarding evaluative adequacy: how do we determine which understandings of the questions and of the construct are legitimate and which are not? I end my discussion with some thoughts on how we might deal with this issue.

 

PRAGMATIC RECOMMENDATIONS FOR DOING SCIENCE WITHIN ONE’S MEANS

Amy L. McLaughlin

Florida Atlantic University

This paper investigates Charles Peirce’s pragmatism, especially as it relates to questions of how we should conduct scientific inquiry. Two separate but related issues are considered here. First, I consider what, according to Peirce, would be the optimal form for inquiry to take. Second, I consider how, given the model of inquiry Peirce recommends and his considerations about the economy of research, inquiry can be conducted so as to accommodate practical constraints.

Peirce introduces a model of inquiry in an attempt to demarcate appropriate methods of inquiry from specious ones. Cheryl Misak, in her Truth and the End of Inquiry, has pointed out that Peirce’s account does not quite do the job required of it. While Misak’s criticism is apropos, her own attempt to fortify Peirce’s account does not succeed, as it falls prey to precisely the criticism she raises against Peirce’s explicit account. The account provided in this paper—the ‘open path’ alternative—draws from Peirce’s corollary to his “first rule of reason”, that one should not block the road to inquiry. The ‘open path’ account is able to withstand Misak’s objections and, when combined with other aspects of Peirce’s work, shows us why the optimal way to conduct inquiry is to follow the path of greatest resistance.

Inquiry is rarely, if ever, conducted in optimal conditions, however. It is conducted in actual, constrained conditions, which require a measure of economy in terms of what can reasonably be pursued and how. The question, then, is how to conduct our inquiries so that they are as nearly optimal as possible given actual constraints. Having worked for many years as a scientist, Peirce was deeply concerned with the question of how to conduct research so as to make the best use of resources. Nicholas Rescher observes that the economy of research plays an important role for Peirce, and that the significance of this has not been sufficiently appreciated; but Rescher does not develop this aspect of Peirce’s philosophy of science in any detail. The thrust of Peirce’s claims is this: considerations of economy indicate that the best way to gather evidence for a hypothesis is to ascertain whether empirical consequences that would not otherwise have been expected yield positive experimental results. Much of the work of the paper is concerned with giving a clear explication of what Peirce’s recommendation amounts to and how it can be used to good effect. To facilitate this discussion, I focus only on the scientific context in which one is gathering evidence to decide among competing hypotheses, leaving aside questions about formulating hypotheses in the first place. The view developed in this paper is intended to synthesize some of Peirce’s recommendations for the practice of science in the above-specified context. The account makes use of Peirce’s general theory of inquiry and his invocation of ‘resistance’, as well as his considerations about how to navigate the constraints of actual research (in terms of time, funding, available experimental apparatus, etc.).

 

DEMOCRATIZATION OF RISK ASSESSMENT OF CONVERGING TECHNOLOGIES

Zahra Meghani

University of Rhode Island

Jennifer Kuzma

University of Minnesota

The convergence of emerging technologies has the potential to make risk assessment more difficult and complicated than it might otherwise be. As emerging technologies (such as nanotechnology, biotechnology, information technology and cognitive technology) merge, they may interact with one another in unanticipated ways, creating new, complex, and multi-faceted risks. Given that possibility, it is tempting to leave the risk evaluation of converging technologies to the experts. However, it would be a mistake to do so, because risk assessment is not just an epistemic activity; it also has normative dimensions. For instance, normative judgments have to be made in each of the four stages of the human health risk assessment process. During hazard identification, dose-response assessment, exposure assessment, and risk characterization, decisions have to be made that involve values and concerns that are non-epistemic in nature. While risk experts are qualified to address epistemic issues, normative judgments should not be left to them. In a democracy, the public should decide which ethical and political values should guide the risk evaluation of technologies. Not to allow the populace a say in such matters is to disenfranchise them, denying them the opportunity to engage in self-definition with respect to the technologies that shape their existence. That is unacceptable because democracies are premised on the principle that the people have the right to choose the values by which they live.

While we espouse the public’s involvement in deciding which normative concerns should guide the risk assessment of converging technologies, we contend that the form of democratic engagement used for that purpose should be mindful of the particulars of the nation in question. The relevant specifics include the distribution of power amongst the various constituencies of that country, the heterogeneity or homogeneity of its populace, etc. To make the argument for our position, we use the U.S. as a case study. We contend that because, amongst other things, the U.S. has, first, a deeply rooted culture of political patronage favoring industry and, second, a heterogeneous population constituted of groups that have a complicated political history with one another, not just any form of democracy can effectively serve as the means by which the people can express their will about the values that should guide the risk evaluation of converging technologies. We examine, in turn, the ability of aggregative democracy, representative democracy, and participatory democracy to function as the means by which the American populace could engage in self-definition regarding the normative considerations that should shape the risk assessment of converging technologies. We contend that, given the particulars of the U.S., neither aggregative democracy nor representative democracy is suited for that task. Using this case study, we establish that, in order to ensure that the people decide the normative questions that arise during the risk assessment of converging technologies, the mode of the public’s participation in the evaluation process will have to be tailored to the particular political realities of that country.

 

A REEXAMINATION OF BIOLOGICAL INFORMATION FROM THE PERSPECTIVE OF PRACTICE

Barton Moffatt

Mississippi State University

Much of the debate surrounding the concept of information in biology centers on the question of whether or not biological systems ‘really’ carry information. The criterion for determining whether a system ‘really’ carries information is whether there is a principled, theoretical account of information that captures the relevant biological usages. If biological systems do not carry information in this sense, information talk is termed merely heuristic and dismissed as philosophically uninteresting. To date, the three proposed theoretical accounts of information—mathematical, causal and teleosemantic—fail to capture what biologists mean. Details of other biological practices that utilize informational concepts are often lost because the debate is too focused on one instance of information talk, genetic information, and because biological representations are thought to need a certain kind of theoretical foundation. The problem with this methodology is that it takes the failure of philosophical accounts of information to capture current biological practices as conclusive evidence that informational representations in biology are incoherent. This approach is backwards. A better strategy is to get a clear understanding of biological practice and then to use it to shape our understanding of the philosophical significance of biological information.

In this paper, I shift attention from abstract reasoning about information in the philosophical literature to concrete reasoning about informational models in biology. The current debate pays too little attention to the biologically prominent concept of signal. I develop a contextualized understanding of signaling models in biological practice. I argue that biologists use the concept of signal to model distinct functional roles in biological systems and not in any of the theoretical senses of information found in the current philosophical literature. For cell biologists, a signal causally indicates the state of a system at a given point and is used in the context of a style of functional explanation generally known as ‘causal role function,’ in which a mechanism or entity has a function if its behavior explains a contribution to a capacity of interest.

I support this analysis with an example drawn from cell biology and reframe the debate over the significance of informational terms in biology. The focus on signal recasts the debate by highlighting examples of information talk which are central to active research programs in biology. The advantage of looking at these models is that their centrality to biological practice forces us to reconsider the adequacy of a methodology that dismisses biological models because we lack a particular kind of philosophical understanding of them. Standard philosophical accounts of information rely on assumptions appropriate for the needs of philosophers but are ill-suited for capturing biological practice. A contextualized understanding of the role of signal in biological practice allows us to work out from the details of practice to tackle broader philosophical issues. On this view, the significance of information talk in biology hinges more on our understanding of how biologists represent function than on our understanding of philosophical accounts of information.

 

CELLS THAT COUNT: THE STANDARDIZING OF DIAGNOSTIC TESTS FOR BOVINE MASTITIS

H. Nederbragt

Utrecht University, the Netherlands

Aim. The aim of this paper is to present two strategies that help to reduce epistemological problems inherent in diagnostic reasoning.

Clinical background. The diagnostic test to be discussed is the somatic cell count (SCC), the determination of the number of white blood cells in milk, which may indicate the presence of inflammation of the udder (mastitis) in cows. For several decades SCC has been the gold standard, i.e. a reference test that is supposed to determine a disease state unambiguously. The definition by epidemiologists of the power of a diagnostic test in terms of sensitivity and specificity leads to a problem when a new test is calibrated: the new test will always have a lower calculated power than the gold standard, even though it may be a better test. This may be called “undercalibration”.
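
A schematic simulation (an invented illustration, not Nederbragt’s data) makes the undercalibration point vivid: a test that tracks the true disease state better than the gold standard still earns mediocre sensitivity and specificity when it is scored against that imperfect gold standard.

import numpy as np

rng = np.random.default_rng(1)

def noisy_test(truth, sens, spec):
    # simulate a binary test with the given true sensitivity and specificity
    p_positive = np.where(truth == 1, sens, 1 - spec)
    return (rng.random(truth.size) < p_positive).astype(int)

def sens_spec(test, reference):
    # sensitivity and specificity of 'test' computed against 'reference'
    sensitivity = test[reference == 1].mean()
    specificity = 1 - test[reference == 0].mean()
    return round(sensitivity, 3), round(specificity, 3)

n = 100000
truth = (rng.random(n) < 0.2).astype(int)        # 20% of cows truly have mastitis
gold = noisy_test(truth, sens=0.85, spec=0.90)   # imperfect gold standard (e.g. SCC)
new = noisy_test(truth, sens=0.95, spec=0.97)    # a genuinely better new test

print("new test scored against truth:        ", sens_spec(new, truth))
print("new test scored against gold standard:", sens_spec(new, gold))
# against the imperfect gold standard the better test appears much weaker than it really is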

Historical background. SCC is a test that is used, with varying criteria, for diagnosing the quality of the milk and (the risk of) mastitis in the cow. Technological, economic and commercial developments have, over several decades, led to a continuous re-standardization of SCC and its successors, requiring continuous re-thinking of what counts as a good diagnostic test result for mastitis.

First problem. A diagnostic test may be considered as a tool to generate evidence for a theory of a disease of an individual. Such a theory is underdetermined since all tests have a chance of not diagnosing the disease although it is present, or of diagnosing the disease although it is absent. Multiple derivability (MD) may be used to approach the underdetermination problem. It is the strategy in which two or more theoretically and methodologically independent tests are used to inductively infer the same theory. In the mastitis case bacteria may be cultured and identified from the milk. SCC may be increased for other reasons than mastitis and bacteria may be present without causing mastitis, but positive tests for both may make the mastitis diagnosis more robust.

Second problem. “Undercalibration” is illustrated by the introduction of electrical conductivity (EC) in robot milking to replace SCC. Using SCC as the gold standard, the measured performance of EC is poor. Nevertheless, it is now used routinely because it is easy and cheap. Farmers use EC as a diagnostic test and correct for underdetermination by what I call “weighing evidence against context”, i.e. they take factors unrelated to the test (e.g. the age of the cow) into consideration to decide whether the test is reliable enough to base decisions on; if so, they may use MD by treating EC as a first diagnostic screening, justifying a second test to obtain a more robust diagnosis.

 

A STORY ABOUT GAUGE POTENTIALS, HOLONOMIES AND TIME

Antigone M. Nounou

University of Minnesota

The purpose of this paper is two-fold. I will show that epistemological concerns have guided the work of both scientists and philosophers who have worked on the foundations of semi-classical electromagnetism; and I will argue that recent theoretical work, which removes one by one the approximations of previous work, challenges the most ambitious of the currently available interpretations of the theory, namely, the holonomy interpretation.

The interpretation of Gauge Theories in general, and semi-classical electromagnetism in particular, has troubled physicists and philosophers of physics since the discovery of the Aharonov-Bohm effect in 1959. In a nutshell, the effect caused a stir in the communities of both physicists and philosophers of physics because it showed that electromagnetic fields could not have caused it, unless one accepted unmediated action at a distance. Since it is expected that interpreted theories ought to provide causal explanations that accord with special relativistic tenets, the theory should be reinterpreted and additional mathematical entities should be attributed a physical and causal status. Reinterpreting electromagnetism proved a challenging enterprise, though, because gauge potentials, the theoretical entities that were immediately thought to be causally responsible for the effect, are epistemically inaccessible in the sense that they are in principle unobservable.

Mathematically, gauge potentials are a kind of field predicated over space-time points. But they are not uniquely determined: at any space-time point we have the theoretical freedom to choose a value from among infinitely many. This freedom, called gauge freedom, not only denies the possibility of direct evidence of their existence but also gives rise to semantic problems: we cannot know what the exact value of gauge properties is at a given space-time point. Physicists’ aversion to the proposed interpretation stemmed from the unobservability of the potentials. Philosophers’ arguments against it drew also on the semantic problems. As a result, a different interpretation was sought and found, the so-called holonomy interpretation.

According to the holonomy interpretation, the mathematical entity that causes the effect is an extended object whose properties are distributed non-separably over loops in space-time. Despite their non-separability, holonomy properties are uniquely determined because gauge freedom is removed, and their value, which constitutes a measure of the effect, is equal to the value of magnetic field flux. Thus, holonomies are epistemically accessible, even if indirectly, and cause no semantic problems.
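
For orientation, the standard textbook relation behind these claims (added for the reader; it is not quoted from the abstract) is that a charge q carried around a loop C enclosing the confined flux acquires the gauge-invariant phase factor

\[
\exp\!\left(\frac{iq}{\hbar}\oint_C \mathbf{A}\cdot d\mathbf{l}\right) \;=\; \exp\!\left(\frac{iq}{\hbar}\,\Phi_B\right),
\]

so the holonomy around C depends only on the enclosed magnetic flux \Phi_B and not on any particular gauge choice for the potential \mathbf{A}.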

The holonomy interpretation, however, is based on a formulation of the Aharonov-Bohm effect that relies on a series of approximations and idealizations. It comes as no surprise, therefore, that problems surface when certain of these approximations and idealizations are set aside; in particular, when the temporal dimension is taken into account. There are two ways in which time re-appears in the picture: by considering complete solutions to the original problem, in which the magnetic flux is static, and by examining the effects of time-dependent magnetic fluxes (TDMFs). The scientific argument, advanced from the assessment of the effects of TDMFs, poses a challenge to the holonomy interpretation. This challenge comes from an epistemic quarter: the measure of the effect is no longer equal to the (indirectly) observable magnetic flux. In addition, I will argue from a philosophical viewpoint that both cases show the causal picture depicted by the holonomy interpretation to be incomplete, if not mortally wounded.

 

ON THE USE AND ASSESSMENT OF MODELS: FORGET ABOUT REPRESENTATION

Isabelle Peschard

San Francisco State University

Much emphasis has been put recently on the representational function of scientific models. This emphasis on representation leads to an emphasis on the use of models, since being a representation is being used to represent, and to an assessment of the epistemic value of models solely in terms of the relation between a model and what it is viewed as representing, for instance in terms of structural similarity or inferential capacity.

But that the main use of models in science is to represent is a misapprehension, resulting from considering models in abstraction from the epistemic space in which they are actually used. In scientific practice, representing is not what models are mainly used for. Consequently, the source of their epistemic value lies elsewhere than in the relation that a model bears to what it is a model of.

Modeling activity is the activity of model construction, and what commonly provides the starting point of this construction is nothing other than models. So what models are mainly used for is, in fact, to construct models. They are used, by being transformed or by serving as analogies, to construct further models that will hopefully help elucidate important problems. An important, if not the most important, problem in fluid mechanics, for instance, is the development of turbulence and the process through which a system goes from predictable to turbulent behavior. As an illustration, I will consider the construction of a model of two coupled wakes developing behind two short cylinders, and show how it was used to construct a model of 16 such coupled wakes, itself expected to serve as a template for a model of a large number of such coupled wakes, which in turn was expected to reveal some fundamental aspects of the mechanism of development of turbulence in the wake formed behind a cylinder of infinite length. The row of a large number of coupled wakes was seen as a discrete version of the continuous row of coupled fluid oscillators arranged along the infinite cylinder and suspected to be instrumental in the development of turbulence. Progressively increasing the dimension along which the coupling occurs, by increasing the number of wakes, and controlling the effect of increasing coupling intensity, by varying the distance between the wakes, was viewed as a possible means of gaining insight into the development of turbulence, which is legendarily opaque and uncontrollable.
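
For a concrete picture, the sketch below is a generic toy, not the experimental model described in the talk: a ring of diffusively coupled Stuart-Landau oscillators, one per wake, of the kind often used as a schematic stand-in for coupled vortex-shedding wakes. The number of oscillators and the coupling strength c play the roles of the number of wakes and of the distance between them.

import numpy as np

def simulate_row(n_wakes=16, c=0.2, sigma=0.1, omega=1.0, dt=0.01, steps=20000):
    # Euler integration of dA_j/dt = (sigma + i*omega) A_j - |A_j|^2 A_j + c * (discrete Laplacian)
    rng = np.random.default_rng(2)
    A = 0.01 * (rng.standard_normal(n_wakes) + 1j * rng.standard_normal(n_wakes))
    for _ in range(steps):
        lap = np.roll(A, 1) - 2 * A + np.roll(A, -1)   # nearest-neighbour coupling, periodic boundary
        A = A + dt * ((sigma + 1j * omega) * A - np.abs(A) ** 2 * A + c * lap)
    return A

amplitudes = np.abs(simulate_row())
print(amplitudes.round(3))   # amplitudes settle near sqrt(sigma), i.e. a synchronized regime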

Evidently, to think of the epistemic value of the model of a wake behind a short cylinder, used initially to construct the model of two coupled wakes, solely in terms of the relation between the model and what it is a model of, is to miss the point. Its epistemic value comes from the role this model plays in a network of related activities directed at elucidating some problems viewed as fundamental. A good model is a model that makes a positive difference for the development of this network by opening up new possibilities of fruitful investigation. To understand why a model is regarded as a good model we have to understand what difference it makes and why this difference is regarded as positive.

 

HYBRID VALUES IN EPISTEMIC AND NON-EPISTEMIC PRACTICES

Elizabeth Potter

Mills College

I suggest that we recognize some values to be hybrids, e.g. both epistemic and moral. Analysis of epistemic and non-epistemic values can be undertaken through attention to activities of valuing and evaluating using social practice as the unit of analysis. If we think of human activities as a universe of diachronic, overlapping social practices, examination of different kinds of practice suggests that while some kinds of practice might be discrete and have clear boundaries, this is not true of all. We find, instead, that many practices of different kinds in fact overlap. Whether and to what extent moral and epistemic practices overlap has been an issue in philosophy, although not articulated in this way. A new analytical approach to this issue will be useful, viz. analysis of our activities of evaluating and valuing in both scientific and moral practices and investigation of the ways in which these practices and their constituent evaluative activities overlap.

Most discursive models used in the epistemology of science assume a distinction between epistemic (cognitive) value judgments and non-epistemic value judgments and most challenges to this assumption are made within the same discursive framework. And within a discursive framework many have found it appropriate to analyze values as, in the first instance, objects of attitudes, whether propositional (e.g., intentions) or non-propositional (cf. Anderson). Adopting a practice model allows us to analyze values in terms of activities of valuing and evaluating which, in part, constitute certain social practices. When we do so, we can take a less reified and more naturalized view of values as, in the first instance, epiphenomenal upon these social practices.

We will find that, although we identify particular values more often in some practices than in others, we can sometimes locate them at the intersection of overlapping practices, e.g. epistemic practices and moral practices. I suggest that some of these values are hybrids, i.e. both epistemic and moral at once. Currently, there is debate on a related issue, viz. whether virtues such as trust are, in a particular context like scientific investigation, epistemic or moral. The prevailing assumption is that such a virtue cannot be both (to which Fricker’s analysis of hybrid virtues is an important exception). My paper does not treat epistemic and moral virtues, but instead, their close kindred, values. With attention to two cases, I argue against the assumption that good epistemic practices always have clear boundaries separating them from moral practices. In making the argument, I draw support from the view of values mentioned above as well as from my (more naturalized) understanding of social practices in general and practices of evaluating and valuing in particular. Thus, having shown that practices of different kinds can overlap, including epistemic and moral practices, I suggest that in some cases, the valuing and evaluating activities are epistemic and moral at once. And when this is so, I argue that the values we identify in the overlapping activities are hybrid values.

 

STYLES OF REASONING IN MATHEMATICS: THE CASE OF RICHARD DEDEKIND

Erich H. Reck

University of California at Riverside

In his well-known discussion of reasoning styles, Ian Hacking mentions several main examples, such as the experimental style that emerged in early modern natural science and the statistical-probabilistic style of the nineteenth century social sciences. Most of Hacking's examples come, in fact, from the natural and social sciences. There is one that has to do with mathematics, however: the "postulational" reasoning style characteristic of Ancient Greek mathematics. In this talk I will argue that, if one attends closely to the development of mathematical practice, including some revolutionary changes in it, it is possible to distinguish several different styles of reasoning within mathematics.

As an illustration, and because of its intrinsic interest, I will discuss the case of Richard Dedekind and the radical transformation of modern mathematics during the nineteenth century to which he contributed. In connection with that transformation, some commentators have talked about a "second birth" of mathematics (Howard Stein), as well as of a "revolution in mathematical ontology" (Jeremy Gray). I will attempt to show that, more generally, one can find all the major characteristics of a novel reasoning style, as conceived of by Hacking, in this connection: new types of objects, of "candidates for truth and falsehood", of evidence, of laws, of classifications, and of explanation.

Central for my purposes will be the fact that mathematicians are often not just interested in establishing, deductively, that a certain result is true, but also in understanding why it is true, or in explaining which basic features "make it true" (which then allows the result to be generalized, transferred to other cases, etc.). While all mathematical proofs are deductive, it is with respect to this additional task, as part of mathematical practice, that a variety of reasoning styles can be differentiated. In particular, Dedekind's main contribution was to introduce a style that can be called "conceptual" or "structural", in contrast to the more computational, constructivist style of most of his contemporaries.

My consideration of Dedekind's characteristic reasoning style, which shaped much of twentieth-century mathematics, can be seen as a contribution to several endeavors: (i) the project of analyzing more deeply the radical transformation of mathematics in the nineteenth century; (ii) the project of establishing the applicability to mathematics of the notions of explanation and understanding; (iii) the project of adding a new dimension to current discussions about "structuralism" in the philosophy of mathematics; but also (iv), the project—most relevant for this conference—of further exploring the strengths and weaknesses of the notion of style of reasoning as applied to scientific practice.

 

CROSSING BOUNDARIES: CONTEXTS OF PRACTICE AS COMMON GOODS

William Rehg

Saint Louis University

In the literature on scientific practices, one finds sustained analyses of the contextualist elements of inquiry. However, the ways in which local and disciplinary contexts of practice function as common goods remain largely unexplored. In this paper I argue that a contextualist analysis of scientific practices as common goods can shed light on the challenges of scientific communication and interdisciplinary collaboration. I argue as follows:

1. I begin with Kuhn’s notion of a paradigm. As a number of critics have pointed out, Kuhn’s fixation on incommensurability had unfortunate consequences: he associated his analysis with a problematic meaning-holism that directed attention away from a more significant insight into the practical character of science. Paradigms, that is, are tied to practices, specific ways of doing science that cannot be reduced to explicit rules. Because these practices realize epistemic values, they are experienced by members as ways of doing good science; because such values are open-textured, their determinate meaning and force partly depend on the lived experience of fruitful scientific practice. I thus propose that the difficulties of communication that Kuhn wanted to get at are better described as obstacles to communicating the goodness of a specific research practice, experienced by members as a common good.

2. In the second section I further clarify the idea of a common good as it applies to scientific practices: in what sense(s) are such practices both good and common? I argue that scientific practices are not merely instrumentally good (for the production of public knowledge), but also involve irreducibly social excellences realized in the practice itself. I develop this point with some concrete examples.

3. The common good of lived practice sets a rhetorical challenge for scientists. Insofar as the propositional and symbolic elements of science draw their force from members’ lived experience of their experimental practices, the cogency of evidential arguments contains an indexical component that resists communication to outsiders. What is more, commitment to a specific practice typically involves an element of hope, in effect a tacit claim that a particular research project will yield fruit. Whence the rhetorical challenge: how does one overcome outsiders’ indifference and show that one’s local common good has relevance for wider contexts? In the third section, (a) I delineate the standard ways that scientists meet this challenge, for example by appealing to cross-contextual commonalities in the construction of journal articles; (b) I then test my proposal against some case studies of successful and unsuccessful communication. These analyses have to do with the public character of science, understood as a function of the ability of scientific arguments to legitimately “travel” across contexts.

4. I conclude with some brief reflections on implications of this proposal for understanding the following aspects of scientific practices: (a) the social and value-laden character of scientific practice and discourse; (b) questions of collective action and intentionality in science; (c) problems affecting interdisciplinary work and expert committees.

 

WHY A NURSE KNOWS BETTER: STANDPOINT EPISTEMOLOGY AND NURSING SCIENCE

Mark Risjord
Emory University

According to standpoint epistemology, certain social positions have a kind of epistemic privilege. The interest of standpoint epistemology to the philosophy of science lies in the way it relates knowledge to moral and political value, and in the way it makes knowledge depend on social role. However, it has some important limitations that keep the lessons of standpoint epistemology from being more generally applied. First, even in its more sophisticated formulations, standpoint epistemology is most naturally applied to knowledge of the social world (race, class, gender, forms of oppression, etc.). It has been difficult to show how knowledge of the natural world might depend on social status. Second, the standard discussions treat only class, race, and gender as candidates for epistemic standpoints.

The first section of this essay argues that one standard analysis of epistemic standpoints—Nancy Hartsock's generalization of Marx in (Hartsock 1983)—can be extended to any complementary pair of social roles that meet the same conditions. It follows from this analysis that the professional role of a nurse is an epistemic standpoint. While the historical association between a woman’s social status and the role of a nurse makes the fit a natural one, this essay will argue that the epistemic privilege of the nursing role is independent of gender, class, or race. Moreover, nursing makes visible aspects of health and health care that would otherwise be invisible to the health sciences. This extension of standpoint epistemology to nursing thus shows how there can be privileged perspectives on a domain that is not strictly social.

The second section of the essay develops the idea of a nursing standpoint by discussing a recent study of physician-nurse relationships, Maureen Coombs' Power and Conflict Between Doctors and Nurses (Coombs 2004). This research suggests that in spite of significant changes in physicians' attitudes, there are crucial asymmetries in the way that doctors and nurses think about and respond to the patient. The power and communication dynamics that Coombs describes show that, in at least some situations, the physician-nurse relationship fits standpoint epistemology's model of epistemic privilege. It also suggests specific ways in which these asymmetric social roles create differences in what is known. Coombs' evidence thus makes a prima facie case that there is an epistemically privileged nursing standpoint on health.

The final section of the essay will briefly reflect on the larger consequences of this analysis for the discipline of nursing. Under the influence of classical views in the philosophy of science, nurse scholars have framed the discipline of nursing as a basic science. This has forged a gap between “nursing theory” and the practice of professional nurses. Thinking of nursing science as developing knowledge available from the nursing standpoint breaks down the applied/basic science distinction. In so doing, it may provide resources for closing the theory-practice gap.

 

FROM HACKING'S PLURALITY OF STYLES OF SCIENTIFIC REASONING TO “FOLIATED” PLURALISM, A NEW FORM OF ONTOLOGICO-METHODOLOGICAL PLURALISM

Stéphanie Ruphy

Université de Provence

Philosophical discussions of scientific methodology tend to consider separately two of its major aspects, to wit, its heuristic aspect – how do scientists find out about how the world works? – and its logical or justificatory aspect – how does a scientific result get to be justified? No such separation is to be found in Ian Hacking’s concept of “style of scientific reasoning”, built on A. C. Crombie’s historical analyses of the existence of various styles of scientific thinking in the Western tradition. This is one of the several interesting (and sometimes challenging) features of Hacking’s concept that I plan to discuss in this paper. I will be interested in particular in the various kinds of pluralism that follow from the coexistence of several styles of reasoning in contemporary scientific practice (the statistical style, the laboratory style, the historical-genetic style, etc.). For this coexistence not only shows that there is more than one way to find out about the world, but it also suggests, given Hacking’s characterization of styles as “self-authenticating”, that there is more than one standard of rationality. Actually, there would be as many as there are styles, since each style brings into being new standards of reason, along with new types of propositions that are candidates for being true or false. I will first discuss several issues raised by this form of pluralism. What kind of theory of truth is compatible with it? Can the respective merits of various styles be assessed? What exactly is the nature of Hacking’s relativism?

My focus will then be on the ontological import of the existence of several styles, given that each style creates new kinds of objects. How should one interpret the constructivist dimension of this internalist ontological claim? And, more importantly for gaining insight into actual scientific practice, how do the objects created by a style connect with the mundane scientific objects that are not internal to any style? I will suggest conceiving of this articulation in terms of “ontological enrichment”: objects created by a style (the class of eruptive variable stars of type UG-Z Cam, electrons, the species Canis lupus, a population characterized by its mean and dispersion) do not simply add further entries to the bestiary of scientific objects, independently of the already existing objects (stars, electrical phenomena, dogs, populations). I will explain how they should rather be conceived as enriching these mundane objects ontologically, in particular by extending the class of propositions about them that have truth values.

I will finally argue that the pluralism that results from these processes of enrichment of objects qua scientific objects (which I will dub “foliated” pluralism) captures some essential features of contemporary scientific practice that are ignored by more traditional forms of “patchwork” pluralism, that is, forms of ontologico-methodological pluralism based on the idea that the domain of science can be carved into various kinds of objects (physical particles, living organisms, human societies, etc.) calling for specific methods of inquiry and constituting the subject matter of distinct disciplines.

 

EPISTEMOLOGY UNDER PRESSURE: SACRIFICING KNOWLEDGE TO KEEP BIG PHARMA UNDER CONTROL

Elizabeth Silver

University of Melbourne

Practical problems arise whenever the pure philosophy of science is applied to real situations - even that triumph of the scientific method, the randomised controlled clinical trial. I consider the intersection of two problems: patient non-adherence to the prescribed dosing regimen, and the pharmaceutical industry's influence on medical research.

Patient non-adherence is considered a nuisance in trials because it dilutes the difference between the treatment and control groups. Since the 1990s, some epidemiologists have argued that non-adherence is actually an opportunity: it creates a natural experiment that tests the effects of lower doses than those prescribed. Such data is observational rather than experimental, but that is no excuse for wasting it. Adherence-based analyses could reveal much about the effects of drugs (including side effects). However, this data is routinely wasted, in part because of efforts to counter the second pragmatic problem: the influence of large pharmaceutical companies (a.k.a. Big Pharma).
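A minimal sketch of the contrast at issue, using simulated data in Python (all numbers, parameters, and the stratification scheme are hypothetical illustrations, not drawn from any actual trial or from the abstract): an intention-to-treat comparison is diluted by non-adherence, while stratifying the treated arm by the dose actually taken exposes the "natural experiment" such analyses exploit.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Randomised arm assignment: 0 = placebo, 1 = treatment (hypothetical trial).
arm = rng.integers(0, 2, size=n)

# Hypothetical adherence: fraction of prescribed doses actually taken.
adherence = rng.beta(5, 2, size=n)

# Hypothetical outcome: benefit scales with the dose actually received.
outcome = 1.0 * arm * adherence + rng.normal(0.0, 1.0, size=n)

# Intention-to-treat: compare the arms as randomised, ignoring adherence.
itt = outcome[arm == 1].mean() - outcome[arm == 0].mean()
print(f"ITT estimate (diluted by non-adherence): {itt:.2f}")

# Adherence-based summary: stratify the treated arm by doses actually taken.
placebo_mean = outcome[arm == 0].mean()
for lo, hi in [(0.0, 0.5), (0.5, 0.8), (0.8, 1.01)]:
    stratum = (arm == 1) & (adherence >= lo) & (adherence < hi)
    diff = outcome[stratum].mean() - placebo_mean
    print(f"adherence {lo:.1f}-{min(hi, 1.0):.1f}: effect vs. placebo {diff:.2f} (n={stratum.sum()})")

As the abstract notes, the stratified comparison is observational: adherent patients may differ from non-adherent ones in ways that independently affect outcomes, which is one reason such analyses are treated with caution.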

By and large, the philosophy of science assumes that scientists are honest. When the integrity of a whole discipline becomes compromised, philosophy advises us to replicate the studies independently (just as, e.g., creation science can be compared to evolutionary biology). But pharmaceutical trials are far too expensive for most to be replicated. The industry provides a large portion of the funding for these trials, and there is plenty of evidence that this conflict of interest produces biased results, at great cost to the consumers of that evidence.

If Big Pharma were allowed to use adherence-based analyses as evidence of their products' efficacy (in addition to or instead of standard intention-to-treat analyses), they would have much more freedom to cherry-pick the most favourable analysis. For this reason, the FDA and the major medical journals do not allow companies to use adherence-based analyses as the primary measure of efficacy. There are many areas of trial design where Big Pharma are getting away with murder, but this is not one of them.

However, this protection comes at a price: adherence-based analyses are done poorly, reported rarely, and regarded as peripheral. The vast majority of adherence data is wasted. So for most products, certain questions that matter to patients are never answered - including 'What will probably happen if I take the drug exactly as prescribed?' and 'What if I miss a dose, or several?' Furthermore, this loss is hidden; consumers assume that trial results apply to people who follow the regimen, rather than being an average over variously adherent patients. Also, non-adherence remains a nuisance, so trialists often select highly adherent patients for their studies, reducing the external validity of the results.

In other words, we pay a price in lost information for every ounce of protection from bias. More importantly, the current trade-off may not be the epistemologically optimal price: there may be other ways to restrict trial analysts' freedom while still encouraging them to perform and report good adherence-based analyses. This case study illustrates an epistemological dilemma that arises from real-world pressures rarely considered by philosophers of science.

 

THREE NEW PARADIGMS IN MEDICAL EPISTEMOLOGY

Miriam Solomon

Temple University

Over the last fifty years, three new paradigms have developed in medical epistemology. The traditional practices of clinical judgment and causal scientific reasoning have been supplemented with Evidence-Based Medicine (including the techniques of randomized controlled trials, formal decision sciences, systematic evidence review and meta-analysis), Expert Consensus (often attained in Consensus Conferences) and Narrative Medicine.

Each of these epistemological paradigms has unsettled methodological questions of its own, as well as questions about how it relates to the other paradigms. This paper will especially focus on some epistemic difficulties created when the paradigms disagree. The ways in which these difficulties have been resolved reveal more about the epistemic situation, even though there is not, nor should there be, a general “meta-level” or privileged normative perspective.

For example, the discussion of particular medical cases can conflict with the recommendations of evidence-based medicine. Some argue that the results of meta-analysis do not give enough guidance for clinical decisions involving individual patients, and that evidence-based medicine must be supplemented with clinical judgment and/or narrative explorations. Some question the reliability of Consensus Conferences, reflecting on the many biases in the process of consensus formation, and prefer such conferences to take place after a formal evidence report is produced, yet find no better way to fill in gaps in medical knowledge and to disseminate medical knowledge. And narrative medicine has patient care goals that do not correspond to the outcome measures used in evidence-based medicine.

Some of this controversy is expressed as a “medicine is a science” versus “medicine is an art” discussion, but I argue that this is an oversimplified and ultimately unhelpful characterization. Recent work in philosophy of science has shown that scientific work (in other sciences as well as in medicine) includes narrative and metaphorical reasoning, appeals to expertise, imagination and empathy, and different kinds of evidence. The controversies are interdisciplinary, but not usefully characterized in terms of the traditional sciences versus humanities dichotomy or the natural/social sciences dichotomy.

 

ON SECURING THE TRUSTWORTHINESS OF EVIDENCE: MODELING THE GLOBAL SURFACE TEMPERATURE DATA

Aris Spanos

Virginia Tech

The primary objective of this paper is to discuss the potential relevance of philosophy of science (knowledge/evidence) in making progress with problems about regulation-relevant evidence, when proper attention is paid to the adequacy of statistical modeling and inference.

Disagreements about evidence often stem from different choices concerning:

(a) the relevant scientific knowledge pertaining to the issues of interest,

(b) the collection and compilation of the relevant data,

(c) the statistical modeling (the basic statistical model),

(d) the statistical analysis and inference (estimation, testing, prediction), and

(e) the framing of reliable inference results in the form of regulation-relevant trustworthy evidence.

The paper focuses on (c)-(e) pertaining to the statistical aspects of modeling the annual average global surface temperature data going back to 1856, arguing that, contrary to conventional wisdom, there is a very narrow margin of tolerance for these choices when due attention is paid to the reliability and precision of the resulting inference. In particular, the choice of the statistical model in the context of which the statistical analysis and inference will take place needs to satisfy a rather stringent criterion known as statistical adequacy: the model's probabilistic assumptions are valid for the particular data; see Mayo and Spanos (2004). Securing statistical adequacy is a non-trivial problem in practice, but without it the reliability of inference will be severely undermined.
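To make the notion of statistical adequacy concrete, here is a rough sketch in Python (simulated anomalies and a deliberately simple trend model, not the paper's own data or analysis): the model's assumption of uncorrelated errors is probed via the residual lag-1 autocorrelation, and a large value flags the model as statistically inadequate.

import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1856, 2008)
T = len(years)

# Simulated anomalies: a modest trend plus AR(1) noise, i.e. exactly the kind
# of temporal dependence that a "trend + white noise" model assumes away.
noise = np.zeros(T)
for t in range(1, T):
    noise[t] = 0.7 * noise[t - 1] + rng.normal(0.0, 0.1)
anomaly = -0.4 + 0.005 * (years - years[0]) + noise

# Estimate the simple statistical model: anomaly = a + b*year + error.
b, a = np.polyfit(years, anomaly, deg=1)
residuals = anomaly - (a + b * years)

# Misspecification check: under the model the errors are white noise, so the
# residual lag-1 autocorrelation should fall inside the white-noise band.
r1 = np.corrcoef(residuals[:-1], residuals[1:])[0, 1]
band = 1.96 / np.sqrt(T)
print(f"lag-1 residual autocorrelation: {r1:.2f} (white-noise band: +/-{band:.2f})")
if abs(r1) > band:
    print("assumption violated: the model is statistically inadequate, so "
          "inferences about the trend coefficient are unreliable")

This is only one crude diagnostic; the thorough mis-specification testing and respecification strategies discussed in the paper probe the full set of probabilistic assumptions.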

Given a statistically adequate model, ‘optimality’ will determine the most effective statistical inference procedures one can employ to answer the substantive questions of interest in a way that ensures both the reliability and precision of the resulting inference. In addition, the framing of the inference results needs to satisfy certain epistemic principles for securing the trustworthiness of the evidence pertaining to the substantive questions of interest.

Unfortunately, there is widespread confusion in applied research concerning the use and abuse of frequentist statistical tools, such as p-values, p-value curves, confidence intervals (CI) and statistical significance. These confusions include (see Mayo and Spanos, 2006):

(i) the fallacies of acceptance and rejection, and

(ii) (mis-)interpretations and flawed rules of thumb associated with observed CIs.

The paper reconsiders the statistical modeling of the annual average air surface temperature during the period 1856-2007, in the context of the Error-Statistical approach, paying particular attention to the statistical adequacy of the underlying statistical model using thorough Mis-Specification testing and respecification strategies; see Spanos (2007). It is argued that the current evidence in the Intergovernmental Panel on Climate Change (IPCC) Report (2007) needs to be reexamined in light of the fact that the underlying statistical models are misspecified. In particular, when a statistically adequate model is used as a basis for inference, the evidence pertaining to the substantive questions of interest indicates that the global warming problem appears to be considerably more serious than the IPCC report suggests.

The paper also argues that a statistically adequate model can provide a sound basis for reliably probing the multitude of potential explanatory factors, and thus reliably constrain the search for establishing adequate substantive explanations for global warming.

References

IPCC (2007), Climate Change 2007 - The Physical Science Basis: Working Group I Contribution to the Fourth Assessment Report of the IPCC, Cambridge University Press, Cambridge.

Mayo, D. G. and Spanos, A. (2004), “Methodology in Practice: Statistical Misspecification Testing,” Philosophy of Science, 71: 1007–1025.

Mayo, D. G. and Spanos, A. (2006), “Severe Testing as a Basic Concept in a Neyman–Pearson Philosophy of Induction,” British Journal for the Philosophy of Science, 57: 323–357.

Spanos, A. (2007), “Curve-Fitting, the Reliability of Inductive Inference and the Error-Statistical Approach,” Philosophy of Science, 74: 1046–1066.

 

INTERNALIST AND EXTERNALIST ASPECTS OF JUSTIFICATION IN SCIENTIFIC INQUIRY

Kent Staley & Aaron Cobb

Saint Louis University

Contemporary epistemologists have devoted considerable attention to conceptual analyses of the nature of epistemic justification, but there is great disagreement about whether the factors relevant to the justification of a person’s belief must be internally accessible to that person (Alston 1989; Fumerton 1996; Kornblith 2001; BonJour and Sosa 2003; and McGrew and McGrew 2006). This paper focuses on the scientific practices directed at justifying experimental conclusions and what they could reveal about this debate as it concerns scientific inquiry. Although important theories of evidence suggest that internal accessibility is not required for scientific justification, it seems that numerous justificatory practices in the sciences are best understood from an internalist perspective. We seek to resolve this tension by analyzing a prominent theory of evidence—Deborah Mayo’s error-statistical account—and by considering a widespread and well-documented argumentative practice—appeals to robustness.

In order to make this dispute relevant to justificatory practices in the sciences, however, we argue for a shift away from beliefs as the relevant epistemic category (cf. Baird 2004; Pitt 2005). While beliefs may play an important explanatory role in understanding the actions of scientists, scientific knowledge-production consists in acts of assertion through preprints, publications, presentations, decisions taken in collaboration meetings, and so on. Thus the debate shifts from a concern about an individual’s grounds for belief to a focus on socially situated epistemic practices. Hence, justification in the sciences is closely tied to the demand by scientific communities to show that an assertion is supported by evidence. A theory of evidence can be understood, in part, as an attempt to explicate a concept of scientific justification.

Deborah Mayo’s error-statistical theory of evidence is a theory of this kind (Mayo 1996; Mayo and Spanos 2006). On Mayo’s account, evidential relations are made to depend on properties of testing procedures, such as error-rates, that hold independently of investigators’ beliefs. Thus, it seems that evidential relations do not depend upon internal accessibility. But error-statistical justification rests not merely upon the use of testing procedures that are in fact reliable but on those which can be shown, through such practices as misspecification-testing (Spanos 1999; Mayo and Spanos 2004), to have the necessary statistical properties. Thus, for evidential claims advanced as contributions to socially-organized scientific inquiry, at least some justifying reasons must be internally accessible.

Furthermore, the practice of appealing to robustness (Levins 1966; Wimsatt 1981) suggests that the justification of empirical claims in science must incorporate an internalist element. An argument from robustness in support of a claim H proceeds from the premise that several, at least partly independent, tests or sources of evidence all support H. Philosophers of science have discussed such arguments as they appear in literatures as diverse as climate modeling (Parker 2006), population biology (Levins 1966; Weisberg 2006), cell biology (Culp 1994), and particle physics (Staley 2004; 2008).
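A toy calculation in Python (my own illustration, with made-up error rates) of why agreement across partly independent tests carries evidential weight when the tests err independently:

# Hypothetical per-test probabilities of wrongly supporting H when H is false.
p_err = [0.10, 0.15, 0.20]

# If the tests' errors are independent, the chance that every one of them
# misleads us at once is the product of the individual error probabilities.
p_all_mislead = 1.0
for p in p_err:
    p_all_mislead *= p

print(f"P(each test alone misleads): {p_err}")
print(f"P(all tests mislead together): {p_all_mislead:.4f}")  # 0.0030

The force of such an argument plainly hangs on the independence of the tests, precisely the sort of auxiliary assumption whose possible failure the notion of securing evidence, developed below, is meant to address.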

We argue that these appeals are best understood as attempts to secure evidence claims, where securing an evidence claim is understood to consist in ruling out epistemically possible scenarios in which claims of evidence in support of a hypothesis fail due to a reliance on false auxiliary assumptions. By considering patterns of robustness argumentation in numerous examples involving collaborations, we postulate that the notion of epistemic possibility at work is concerned with what is possible given the state of knowledge of the research group, reflecting again our shift away from an individualistic epistemology to a social epistemology of science. As a corollary, the access requirement of internalism has to be reformulated in non-individualistic terms.

Securing evidential claims or inferences requires the consideration of possible scenarios that would entail the falsehood of relevant assumptions. As such, security requires that the reasons ruling these scenarios out are accessible to the scientific community—an internalist requirement. Nonetheless, security is not a strictly internalist notion insofar as attempts to secure evidence can fail for reasons that are not accessible to investigators. This suggests that the internalism-externalism debate, typically understood as a dichotomy, must be reformulated in order to make sense of these justificatory practices.

 

REVISITING ONTOLOGY AND ITS CONSEQUENCES

Sarah Star

University College London

In 1969, Quine enjoined philosophers to leave matters of ontology to the determination of scientists with the suggestion that a thorough-going naturalism – a philosophical position he considered to be necessitated by the empirical successes of the natural sciences – can recognise “no place for a prior philosophy.” Implicit in Quine’s view are the suppositions that the sciences themselves engage in ontology and that, ultimately, they are better equipped than philosophy to undertake such work. As a consequence of Quine’s injunction, and the widespread deference with which it was met, a hierarchy of influence has come to be instituted and broadly accepted within philosophy of science, in which science is understood to defer to nature, and philosophy to science.

Drawing on the late works of Maurice Merleau-Ponty, this paper will challenge Quine’s characterisation of scientific work as intrinsically ontological in orientation and will advocate a renewed philosophical interest in ontology. It will suggest that the scientist is motivated less by ontological concerns than by the desire to gain a foothold and intervene, and that, as such, her ontology is likely to be uncritical, objectivist, and somewhat ad hoc, thereby providing less than ideal working conditions for the philosopher concerned to understand how scientific knowledge is built and substantiated. By contrast, it will be the position of this paper that phenomenologically oriented ontological models, founded in evidence drawn from scientists’ actual practice, will offer the valuable prospect of an important corrective for the scientist’s ontology and, in so doing, will suggest fruitful ground upon which philosophers and scientists might constructively engage with one another.

In an effort to demonstrate these ideas more concretely, the bulk of the paper is taken up with a thought experiment conducted in three acts. In each case, an ontological understanding is sketched and its consequences for our conceptions of scientific and philosophic practice are briefly explored. Act one engages the classic understanding of a Kosmotheoros in confrontation with a World as Object; act two takes more seriously the consequences of our embodiment, recognising the limitations it introduces and the natal bond it seems automatically to afford us with the world; act three retains this recognition of both our embodiment and our embeddedness within a natural world, while also acknowledging our implantation within certain instituted social and cultural settings that are both constituted by and constitutive of us as living beings.

With the introduction of each new ontological model, our conceptions surrounding the nature of scientific knowledge and practice will be made to shift, as will our understanding of our own philosophical practice. In adopting an increasingly realistic sense of our position with respect to the world, we will find that we move from what I suggest is the untenable image of science as ontology and philosophy as verification and legitimation to a recognition of science as a kind of specialised culture and philosophy as a sort of poetry or literature. Finally, we will find that Quine’s linear and unidirectional hierarchy is forced to give way to a more hermeneutic conception.

 

SIFTING SOUND SCIENCE FROM SNAKE-OIL: IN SEARCH OF DEMARCATION CRITERIA FOR SCIENCE AS ACTUALLY PRACTICED

Janet D. Stemwedel

San José State University

Karl Popper’s attempt at a demarcation criterion to distinguish science from non-science is famously problematic, at least if we take actual scientific practice seriously. Nonetheless, Popperian resources can seem better than nothing when it is important to establish what counts as legitimate scientific expertise or to discern which judgments our “best scientific evidence” supports. Torn between holding science to a logically defensible yet nearly impossible standard, and regarding science as nothing more than “what scientists do,” the public has an urgent need to sift sound science from snake-oil on pressing problems, from product safety to education to the habitability of our planet. Though sorting good science from bad seems clear in retrospect, a public struggling to apply scientifically informed opinions to decision-making can seldom afford to wait until a scientific question has been answered once and for all.

In this paper, I examine why specifying the criteria for good science is so difficult, even in a world where good scientific information is increasingly important. Building scientific knowledge involves examining the world in a frontier region, where it’s hard to judge whether results are good or faulty, hard to distinguish data from noise, because you don’t know what to expect and you haven’t yet observed what there is to observe. Thus, it is not easy to line up hypotheses, draw clear consequences, seek potentially falsifying observable outcomes, and distinguish legitimate falsifications from failures of experimental conditions or theorizing. There are good stretches of scientific activity that look (and feel) like flailing. Rather than being pathological, these could be utterly necessary to get to the periods of scientific activity that fit the philosophers' accounts.

Given how pressing it is for scientists and non-scientists alike to distinguish good science from the alternatives, I explore other strategies for making this distinction. Building on Popper’s core intuition that good science exposes itself to conditions that could reveal error, I consider how this scientific commitment guides the scientist’s interactions with the phenomena and with other members of the scientific community and the knowledge claims they generate. I argue that good science manifests itself in its level of engagement with conflicting results and countervailing hunches, reflecting both rigor in inferences about the world and serious participation in the community’s judgments of the credibility of those inferences.

I also consider how pictures of science that draw the boundaries clearly but gloss over the messy features of actual scientific practice may have pernicious effects. Overly neat demarcation criteria, once enshrined in jurisprudence or the public understanding of science, expose practice that scientists themselves view as legitimate to classification as non-scientists. Unless scientists are committed to hiding their actual practice from non-scientists, such idealized definitions might result in their rejection of philosophy of science as a legitimate intellectual pursuit (since the philosophers’ idealized science departs significantly from science as practiced). Such a rift could put philosophers and scientists alike off the important project of understanding how, despite the messiness and ambiguities, scientific activity builds a body of reliable knowledge.