Of Opaque Oracles: Epistemic Dependence on AI in Science Poses No Novel Problems for Social Epistemology
Deep Neural Networks (DNNs) are epistemically opaque in the sense that their inner functioning is often unintelligible to human investigators. Inkeri Koskinen has recently argued that this poses special problems for a widespread view in social epistemology, according to which thick normative trust between researchers is necessary to handle opacity: if DNNs are essentially opaque, there is simply no one who could be trusted to understand all the aspects a DNN picks up during training. In this paper, I present a counterexample from scientific practice: AlphaFold2. I argue that trust is not necessary for epistemic reliance on an opaque system, but reliability is. What matters is whether, for a given context, the reliability of a DNN has been compellingly established by empirical means, and whether there exist trustworthy researchers who have performed such evaluations adequately.
Speaker
Jakob Ortmann
Organizer
Centre for Ethics and Law in the Life Sciences (CELLS)
Date
25 November 2024, 10:15-11:45 am
Location
Otto-Brenner-Str. 1, 30159 Hannover
Building: 1930
Room: 1930.A001