Onsite vs. alternative (central, statistical) monitoring

BMJ had an excellent case in which problems were detected by statistical means such as correlation analysis; the trial was deemed suspect upon submission to the journal for publication, and Ian Roberts' group carried out the analysis. Steven George has written an article in the cancer field about the advantages of central statistical monitoring for detecting fraud.

Imprecision or missing data is a more important issue than fraud, e.g., failure to obtain test results from outlying hospitals. Often the personnel executing a trial are not familiar with clinical trial technology. But there is something particularly unacceptable about the occurrence of fraud in a study, even if the real impact is small. Some level of auditing is important. A little auditing will cause just enough fear to prevent most fraudulent activity, and also helps ensure proper patient enrollment, adherence to adjudication committee rules, etc. [However, some cases of fraud have occurred at sites that knew they would be visited. There is no evidence that more monitoring detects more fraud.]

Sometimes the data come in too late to detect a problem, and statisticians are not trained in problem-detection analysis methods. Statisticians spend more time on more "glamorous" aspects of analysis such as efficacy analysis. Monitoring visits should be targeted (e.g., at study outset and after central statistical review reveals suspicious data). Monitoring may quickly identify the absence of source documents. With electronic data capture, data can become available in real time, well before a monitor could check the source data. On the other hand, when site personnel know a monitoring visit is about to happen, they will work hard to acquire needed data. Often the comfort of knowing that each site is being monitored has led to a lack of central statistical review.

Examining outcome data by country, or across multiple sites within a country where medical practice is standardized, may be helpful (a minimal site-level check along these lines is sketched below). Sometimes outcomes may not be reported; it is not uncommon, however, to observe real event rates for some sites that are very low. Monitors do things other than monitoring, including training and site closeout, and might better be called by a more general name. Sometimes the visitor just needs to fax a backlog of forms to the study's central office. NIH sometimes forces a study to bring in independent monitors who are not sufficiently familiar with the study design or conduct.

An advantage of statistical checks is that they can be programmed, and the programs can be updated and the checks re-run to retrospectively check older data. This assumes that error-checking rules have been set up, which sometimes does not happen immediately. But in isolation from the more global review that a physician can do during focused monitoring, a statistical look may be inadequate. Electronic data capture has a number of advantages; automatic time stamping of records can provide important clues for detecting data forgery. Patients may not like to have study personnel entering data on a computer, so it is important to train personnel to maintain eye contact with the patient. The amount of juggling of tasks that site personnel are responsible for must be recognized. And source documentation is sometimes needed in case of audits by regulatory authorities.

What kind of monitoring is appropriate for what kind of studies? Every site should be visited at least once.
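To make the idea of a programmed, re-runnable central check concrete, here is a minimal sketch in Python that compares each site's event rate to the rate pooled over all other sites. The site names, counts, and z-score cutoff are hypothetical; a real implementation would use exact or hierarchical-model methods and adjust for multiple comparisons, and a flag is a prompt for follow-up, not proof of a problem.

```python
# Minimal sketch of a central statistical check: compare each site's event
# rate to the rate pooled over all other sites (normal approximation).
# Site names, counts, and the z-score cutoff are hypothetical; a flagged
# site is a prompt for follow-up, not proof of a problem.
from math import sqrt

# (events, patients) per site -- hypothetical data
sites = {
    "site_A": (12, 100),
    "site_B": (9, 80),
    "site_C": (1, 120),   # unusually low event rate
    "site_D": (11, 90),
}

def flag_outlying_sites(sites, z_cut=3.0):
    """Return sites whose event rate deviates from the rate observed at all
    other sites by more than z_cut standard errors."""
    flagged = []
    for name, (events, n) in sites.items():
        other_events = sum(e for s, (e, m) in sites.items() if s != name)
        other_n = sum(m for s, (e, m) in sites.items() if s != name)
        p0 = other_events / other_n            # reference rate from other sites
        se = sqrt(p0 * (1 - p0) / n)           # SE of this site's rate under p0
        z = (events / n - p0) / se
        if abs(z) > z_cut:
            flagged.append((name, round(events / n, 3), round(p0, 3), round(z, 2)))
    return flagged

print(flag_outlying_sites(sites))   # flags site_C in this made-up example
```

Such a check can be re-run as data accumulate or after the error-checking rules are revised, which is part of the appeal of programmed central review.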
Helping people understand the protocol, checking the quality of interactions of personnel within a site, and other issues should be combined with that initial site visit. A monitor should query the research nurses to verify their actual study time allocations. Industry-sponsored trials tend to have a larger number of small centers, which has major cost implications. And if the sites are very small, their data will be difficult to analyze; however, their impact on the overall analysis will be very small. CROs are contributing to the negative economic impact of monitoring because it is such a large part of their revenue. Bias is a main problem, so are visits less important when the endpoint is "hard"? Public trust is a valuable commodity, and sometimes low-yield approaches to problem prevention must be undertaken. Politicians are risk-averse, especially about errors of omission. However, low-cost methods are generally better for detection. Some things can be sorted out by telephone.

Before sufficient data are available, how does one specify a monitoring plan? This issue is especially important for unblinded (e.g., surgical) trials. Quality should be defined. Proper patient enrollment is extremely important. Perhaps one of the most important criteria for whether a site should initially be planned for monitoring is the track record and credentials of the site. Most of the money savings should come from avoiding later visits. Thinking about visit costs versus what else the study could have answered (e.g., a biomarker substudy) is a useful frame of reference. An across-study database documenting site history could be invaluable. For non-industry trials such data are harder to come by, and design characteristics need more emphasis. Should studies with short duration of follow-up be monitored more frequently? More research into fraud risk factors is needed, with derivation of a risk score. Motivation for dishonesty is primarily money and prestige, even at very small sites. Non-malevolent data errors can sometimes be detected by analysis of electronic audit trails.

Statistical methods for central monitoring should continue to be developed, as should simple high-resolution raw-data graphics. We also need to learn how to interpret audit analysis results. The field of provider profiling/league tables can aid in interpretation so that natural variation across sites is not flagged. Making graphics with random site assignments can also help one calibrate that variation (a sketch of this idea is given below). Analysis of keystrokes, if the electronic data capture system saves them in an audit trail, can also be helpful. More data about data errors/fraud are needed before the strongest argument for reducing monitoring can be made. Extremely important is the complementary nature of central and on-site monitoring, as well as regulator buy-in. We should be doing more central monitoring no matter what changes are made in site monitoring emphasis. We also have to educate regulators and the public about the small impact of fraud/errors at a single site.

-------------

Roles need to be clearly defined, e.g., sites should not expect monitors to do some of their work. Sites often ask for more monitoring when there are insufficient local resources. For planning purposes it is best to start with no monitoring and add what is really needed. For monitoring of the consent process it may be best to survey participants to find out what they think they consented to, etc. Webcams and computer-based conferencing can also be used to reduce on-site visits.
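Returning to the calibration idea above (graphics made with random site assignments): the following is a minimal sketch, assuming hypothetical patient-level records and an arbitrary number of relabelings, of how shuffling site labels and recomputing site event rates shows how much spread to expect from natural variation alone, so that ordinary between-site variation is not flagged.

```python
# Minimal sketch of calibrating between-site variation by random site
# relabeling.  Patient-level records and the number of relabelings are
# hypothetical.  The spread of site event rates seen after shuffling the
# site labels shows how much variation arises by chance alone.
import random

# one (site, had_event) record per patient -- hypothetical data
patients = (
    [("site_A", 1)] * 12 + [("site_A", 0)] * 88 +
    [("site_B", 1)] * 9  + [("site_B", 0)] * 71 +
    [("site_C", 1)] * 1  + [("site_C", 0)] * 119 +
    [("site_D", 1)] * 11 + [("site_D", 0)] * 79
)

def site_rates(records):
    """Event rate per site."""
    totals = {}
    for site, event in records:
        n, e = totals.get(site, (0, 0))
        totals[site] = (n + 1, e + event)
    return {site: e / n for site, (n, e) in totals.items()}

def relabeled_spreads(records, n_reps=1000, seed=1):
    """Max minus min site event rate after randomly shuffling the site
    labels, repeated n_reps times."""
    rng = random.Random(seed)
    labels = [site for site, _ in records]
    events = [event for _, event in records]
    spreads = []
    for _ in range(n_reps):
        rng.shuffle(labels)
        rates = site_rates(zip(labels, events))
        spreads.append(max(rates.values()) - min(rates.values()))
    return spreads

observed = site_rates(patients)
obs_spread = max(observed.values()) - min(observed.values())
chance = relabeled_spreads(patients)
print("observed spread of site event rates:", round(obs_spread, 3))
print("random-relabeling spreads below the observed spread:",
      sum(s < obs_spread for s in chance), "of", len(chance))
```

In practice one would plot the relabeled site rates alongside the observed ones as simple raw-data graphics rather than reduce them to a single spread statistic.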
Challenges: the legal system, paperwork required by regulators (e.g., FDA inspectors), and breaking sites of the habit of having monitors do some of their work. How sites are established and their infrastructure built is very important. More budget could be devoted to statistician time for statistical monitoring.