

Q&A: Jonathan Sheldon, CSO, InforSense
December 2005


Technological developments in IT are letting companies think of drug discovery in a whole new way—replacing the unidirectional drug pipeline with a continuum of triggers and impacts. Recently, Executive Editor Randall C Willis spoke with Dr. Jonathan Sheldon, CSO of integrated analytics specialist InforSense, about the company's views on the ever-expanding informatics environment.
DDN: How does InforSense view the distribution and integration of information and IT across platforms and companies?
Sheldon: InforSense believes that knowledge should be accessible to anyone who needs to make a decision, whether in the scientific intelligence or business fields. Achieving this goal requires lowering the barrier for the end user by simplifying data analysis and removing the dependence on IT specialists to provide tailored solutions. In other words, it means putting control back into the hands of the domain expert.
For a large organization like a pharmaceutical company, enabling scientists to derive this intelligence for themselves requires an environment in which individual users can access and integrate the specific data and tools they need. InforSense's horizontal integrative analytics technology means that companies can provide access to the complete range of information types and analytical tools within an integrated informatics environment from early-stage discovery through to the clinic.
DDN: Is the breaking down of the silo system of information storage and application just talk or are we starting to see movement?
Sheldon: Certainly over the last couple of years we have seen increasing investment in approaches that enable cross-domain research, both within major pharma and in biomedical research centers. Strategies such as systems biology, pharmacogenomics, and biomarker discovery all share the requirement to access and integrate data and methods that cut across traditional silos: biology and chemistry, biology and clinical, chemistry and pre-clinical.
The key drivers are the needs to control downstream risk and to develop a balanced portfolio of projects. The challenge for an organization is to use all you know so that if you are going to fail with a particular drug candidate, then it is going to happen before starting expensive clinical trials. Achieving this gain requires a shift in both information infrastructure and organizational culture.
DDN: Can you give examples where late-stage information impacts next-generation discovery and development; the "think backward" scenario?
Sheldon: Clinical data, whether generated in clinical trials or in real clinical practice, is an incredibly valuable resource that, until recently, has been massively under-utilized in discovery. Many approaches rely on accurate stratification of samples/patients into subsets, for example responders versus non-responders. If this classification is inaccurate, then the subsequent experiments, such as genomic or proteomic studies, are largely worthless. To overcome this barrier between discovery and the clinic, many technical, regulatory, and cultural issues need to be resolved. Although this transition is clearly starting to happen now in many large pharma, it is a slow process owing to the fundamental nature of the issues to be addressed.
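To make the stratification step concrete, here is a minimal sketch (not InforSense's actual tooling; the field names and the 30% response threshold are hypothetical) of splitting patients into responders and non-responders before any downstream genomic or proteomic analysis:

```python
# Illustrative sketch only: stratify patients into responders vs.
# non-responders before downstream analysis. Field names
# ("patient_id", "tumor_shrinkage_pct") and the threshold are assumptions.
RESPONSE_THRESHOLD = 30.0  # e.g., >=30% tumor shrinkage counts as a response

patients = [
    {"patient_id": "P001", "tumor_shrinkage_pct": 45.0},
    {"patient_id": "P002", "tumor_shrinkage_pct": 10.0},
    {"patient_id": "P003", "tumor_shrinkage_pct": 32.5},
]

def stratify(records, threshold=RESPONSE_THRESHOLD):
    """Split patient records into responder / non-responder subsets."""
    responders = [r for r in records if r["tumor_shrinkage_pct"] >= threshold]
    non_responders = [r for r in records if r["tumor_shrinkage_pct"] < threshold]
    return responders, non_responders

responders, non_responders = stratify(patients)
print([r["patient_id"] for r in responders])      # ['P001', 'P003']
print([r["patient_id"] for r in non_responders])  # ['P002']
```

The point of the sketch is that everything downstream inherits this labeling: if the threshold or the response measure is wrong, every subsequent comparison between the two subsets is compromised.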
It's likely that biomedical research organizations that have access to a wealth of real-life clinical data (not just tightly controlled clinical trials) and are less organizationally "restrained" will set the pace here. For example, the Windber Research Institute, headed by Prof. Michael Liebman, has recognized the impact that clinical transaction data can have on the direction of their research programs. In Windber's case, this data is available via their close collaboration with the Walter Reed Army Medical Center.
DDN: How about the "think forward" strategy?
Sheldon: Thinking forward means considering the next 3 or 4 subsequent stages in the drug discovery process, and not just the stage that a drug discovery program is actually in. This was one of the main reasons that the genomics era didn't deliver as we had hoped—too much focus was placed on "is this target disease-associated?" without considering the feasibility of a chemist coming up with a compound to modulate its activity.
Traditionally, the interest has been at the boundary from one stage to the next—does the target or compound pass a set of criteria that in the past has led to a successful drug?—a "best practice checklist". By looking forward to subsequent checklists, companies can consider future issues in earlier decisions, and in this way attempt to maximize the likelihood of success. From an informatics perspective, this requires a truly horizontal platform that enables you to access data and applications from any stage in the drug discovery process in a manner that is appropriate for your level of "know how" for that particular domain; you may not be the expert. It also requires that this process knowledge (i.e., the way the data and applications are linked together) can be captured and mined for best practice, thereby enhancing these checklists over time as the size of the drug discovery data sets increases.
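A "best practice checklist" at a stage boundary can be sketched as a simple set of pass/fail criteria. The properties and thresholds below are illustrative (Lipinski-style developability rules), not an actual InforSense workflow:

```python
# Hedged sketch: a stage-boundary checklist applied to a compound record.
# Criteria and thresholds are illustrative assumptions, not a real pipeline.
def passes_checklist(compound):
    """Return True if a compound meets simple developability criteria."""
    checks = [
        compound["mol_weight"] <= 500,      # molecular weight (Da)
        compound["logp"] <= 5,              # lipophilicity
        compound["h_bond_donors"] <= 5,     # hydrogen-bond donors
        compound["h_bond_acceptors"] <= 10, # hydrogen-bond acceptors
    ]
    return all(checks)

candidate = {"mol_weight": 420.0, "logp": 3.2,
             "h_bond_donors": 2, "h_bond_acceptors": 6}
print(passes_checklist(candidate))  # True
```

"Thinking forward" in this framing means applying not just the current stage's checklist but approximations of the later-stage checklists as well, so that likely downstream failures surface in earlier decisions.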
DDN: What are your thoughts on translational medicine and the role of IT in making it a reality?
Sheldon: Translational medicine is becoming ever more achievable, but the success of the approach depends crucially on informatics capabilities also being provided as a continuum. Some organizations have adopted an approach based on hard-coding the integration of specific software tools for genomics, proteomics, metabonomics, etc., and the development of a monolithic data warehouse to accommodate the diverse data types and volumes. This has the disadvantage of inflexibility (not to mention maintenance overheads)—the warehouse structure is based on providing answers to pre-existing questions, and that isn't how you do research.
The alternative of adopting a patient-centric approach coupled with a highly flexible data and analytic structure provides the inherent flexibility required by translational studies. As a result, the R&D process can be effectively redefined when necessary without requiring major architectural changes to the underlying analytics and data infrastructure.
DDN: What is next for InforSense?
Sheldon: One key development is enhanced ability to browse clinical data using an interface that doesn't have the dimension limitations experienced with existing interfaces, particularly OLAP tools. This capability is critical for clinical data where it is not uncommon to have to query across hundreds of parameters captured for an individual patient. Patient subsets can be selected which then feed into the analytic workflows for more complex cross-domain analysis. As a result we can build predictive models of, for example, response or toxicity very easily. We are developing this technology in close collaboration with the Windber Research Institute and the Walter Reed Army Medical Center as a powerful tool for clinicians.
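The subset-selection step described above, querying across many clinical parameters to define a cohort that then feeds an analytic workflow, can be illustrated with a minimal sketch. The parameter names and predicates are hypothetical, and this stands in for, rather than reproduces, the interface being described:

```python
# Illustrative only: select a patient subset by querying across several
# clinical parameters; the resulting cohort would feed downstream
# analytic workflows. All parameter names are assumptions.
patients = [
    {"id": "P001", "age": 54, "er_status": "positive", "grade": 2},
    {"id": "P002", "age": 71, "er_status": "negative", "grade": 3},
    {"id": "P003", "age": 48, "er_status": "positive", "grade": 3},
]

def select_subset(records, **criteria):
    """Keep records matching every (parameter, predicate) pair."""
    def matches(record):
        return all(pred(record[key]) for key, pred in criteria.items())
    return [r for r in records if matches(r)]

cohort = select_subset(
    patients,
    er_status=lambda v: v == "positive",
    age=lambda v: v < 60,
)
print([p["id"] for p in cohort])  # ['P001', 'P003']
```

In practice the query would span hundreds of parameters per patient rather than two, which is exactly the dimensionality that conventional OLAP-style interfaces struggle with.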
Also, the upcoming new version of our flagship integrative analytics platform, InforSense KDE 3.0, has been engineered to effectively support just this type of translational medicine strategy, in terms of the comprehensive data model and analytic workflow capabilities, enhanced usability for both expert and end-users, and robust performance across large, distributed organizations.
Code: E120509



Published by Old River Publications LLC
19035 Old Detroit Road
Rocky River, OH USA 44116
Ph: 440-331-6600  |  Fax: 440-331-7563
© Copyright 2018 Old River Publications LLC. All rights reserved.