Scott S. Emerson, M.D., Ph.D.
Professor Emeritus of Biostatistics
University of Washington
(V) 206-459-5213 semerson@uw.edu
7055 54th Avenue NE, Seattle WA 98115
Dear Professor Emerson,
I have long followed your career, particularly your publications on non-proportional hazards in group sequential designs.
You made some comments at Thursday afternoon’s estimands session that I may simply have misunderstood.
As I understood it, you remarked that many questions posed about clinical outcomes are unscientific, and that they are unscientific because they cannot be reliably answered.
My view has always been that good science involves trying to understand both one's real problems (which are often complex and unsolvable, tractable only if we make simplifying assumptions that never completely hold) and one's tools, which are often tractable only because they make simplifying assumptions that don't fully address the actual problem at hand.
So in my view, good science often involves identifying questions that are hard to answer with the available tools. Our situation is often like that of the fabled Wise Men of Chelm, who lost their keys in a dark alley but are looking for them under a lamppost because it's so much easier to see there. Positing that one's keys are in the dark alley is not bad science. Looking for one's keys under the lamppost because that's where it's most reliable to look is not good science. And this is so even if it really is impossible to look in the dark alley at the moment. It is always possible that hypothesizing that the real problem lies in the dark alley, and not under the lamppost, may lead someone to invent the equivalent of a flashlight, and hence the ability to look there reliably.
I note this because one of my difficulties with statistics as a profession has been that it often operates like a branch of mathematics rather than a science. It all too often proceeds deductively, starting with a set of tools and seeing what can be done with them, much as mathematics starts with a set of axioms and sees what can be derived from them. Science proceeds inductively, starting with observation and moving from simple description to developing and testing causal theories.
So from this view, taking whether a problem can be reliably solved as the primary criterion not only fails to define what science is; I'm not even sure it's good science. (The term “Wise” in stories about the Wise Men of Chelm was used somewhat ironically.) Our real problems will often be hard, and sometimes impossible, to solve. The applied statistician often has to take the simple tool that most closely approximates the complex problem at hand, and do so with eyes open, trying to understand both the real problem and the limitations of the tool.
In my talk on estimands at JSM 2019, I approached this issue by discussing a feedback loop between goals and feasible solutions, analogous to the Deming Cycle. I believe clinical research often requires acknowledging that available solutions are rarely perfect fits for the real problems. But rather than deductively treating the available solutions as the given and characterizing the misfits (real problems for which no good solutions are available) as “bad science,” the better course would be to treat the problems as the given, describe them accurately, see which available solutions fit them most closely, and then accurately describe the problems the available solutions actually solve, acknowledging that these will often differ somewhat from the problems that were posed. I understand that part of any consulting task is helping a client clarify the problem. Clients may not start out being able to state their problem clearly.
I think some humility is in order. Just because we're statisticians doesn't mean we're the ones who understand the real problem. And our present humble and imperfect tools and their limitations are not the boundaries of science. It is always possible that someone will come up with a solution that solves something closer to the real problem at hand than anything now available. If we are attentive to both the real problem and the limitations of our tools, perhaps we will be the ones who come up with a better solution. If we can't, explaining the real problem clearly, even when we can't solve it, may help someone else devise a solution. And even if that never happens, it is still good science to describe and understand what our problems are.
As Oliver Wendell Holmes put it, “I would not give a fig for the simplicity this side of complexity. I would give my right arm for the simplicity on the other side of complexity.” The applied scientist must accept unsolvable complexity as part of science in order to find a path to the simplicity on the other side.
Sincerely,
Jonathan
Jonathan Siegel
Director, Oncology Clinical Statistics US
_______________________
Bayer: Science For A Better Life
Bayer US
Oncology Clinical Statistics US
100 Bayer Blvd
Building B100, 1B2250
Whippany, NJ 07981
Tel: +1 862-404-3497
Fax: +1 862-404-3131
Mobile: +1 862-309-3347
E-mail: jonathan.siegel@bayer.com
Web: http://www.bayer.com