Advanced collaboration between scientists trained in different fields has become essential in many areas of scientific research. Interdependence, joint ownership, and collective responsibility for data and data analysis are needed among many modern research teams, even those housed in the same wing of the same institution. Yet well-established barriers to collaboration persist: distance, divergent interests and goals, misaligned incentives, lack of trust, organizational and legal obstacles, and more. Some research teams have met these challenges and are far ahead of the collaboration curve; others lag far behind. In general, there is considerable room for improvement. Databases that share research findings in a timely and usable fashion are uncommon, and research collaboration tools and best-practice guidelines are still novel.
Informatics is a budding and crucial field. For now, almost every institution and area of scientific research defines informatics somewhat differently, but the primary focus is always on technology: how computers can help us discover more. Specific challenges currently being tackled by informatics include integrating widely differing data formats in research datasets, standardizing data formats for future collaboration efforts, identifying and pulling more data into repositories, designing architectures for data storage, retrieval, and use, and developing new tools to analyze and sift through increasingly unmanageable volumes of "big data." Despite the current attention paid to these technology-centered challenges, more focus is also needed on big-picture efforts. Informatics isn't just about computer systems; it is about our human ability to peer into research, make insights and connections, and integrate new and helpful perspectives. In short, it is about doing a better job of communicating. Flooding more advanced computer systems with more data will no doubt lead to remarkable breakthroughs, but the ability to intuit patterns, applications, and trends by integrating and comparing the right sets of data, along with sharing tools and best practices across disciplines, could lead to grand new applications, solutions, and discoveries.
Research study design is one of the most well-developed areas of science. Legions of experts have compiled and refined years of best-practice guidelines on the proper design, conduct, and analysis of research studies, covering everything from human subject protection to statistical methods. However, as the expectations of our information society continue to evolve at breakneck speed, holes have developed in these best-practice frameworks.

Communication is one such shortcoming. Many studies with potentially far-reaching impact still allocate only a nominal budget for sharing findings and communicating them to the public, even for bare essentials such as building a good study website and keeping it current, preparing policy briefs and press releases, and making important connections with other researchers in the field through social media, email, and other direct outreach. (To the credit of researchers, conferences are also widely used as communication tools, but these are not adequate by themselves to reach beyond peer communities.) Other studies might have ambitious enrollment plans for potentially life-saving treatments but an inadequate budget for participant recruitment and enrollment, and no on-staff expertise for writing and designing compelling outreach materials.

Databases are another shortcoming of modern research studies. Like communication components, data collection and analysis usually aren't designed with sharing in mind. Data is kept under lock and key until journal articles are published, and comparison with other research datasets is rarely a consideration, let alone a practical objective.

Study selection itself is a third shortcoming: the question of whether many research studies are even necessary. Publish-or-perish pressures may be producing a glut of studies that didn't need to be done in the first place, or that should have been done better.
In summary, designing studies with better communication and data components, increasing collaboration, and easing the pressure to publish will all help improve current research study design.
“Technology Transfer,” as the term is normally used, encompasses issues focused on acquiring and licensing patents. It is an important focus of many higher education institutions, which see more effective tech transfer programs (rightly or wrongly) as potential economic engines for their universities and local economies. Some tech transfer organizations have had more success than others. The most recent survey conducted by the Association of University Technology Managers (AUTM) indicates that 11 of its roughly 200 member universities accounted for more than half of all the licensing and royalty revenues generated from university patents in 2010. In addition, only 16% of tech transfer offices retained enough of the generated revenue to cover their ongoing costs, meaning that the vast majority of these offices run at a perpetual deficit. This lack of adequate funding is one reason why some tech transfer offices are more successful than others. Another reason is institutional capacity for communication-related functions: successful offices add value to ideas, with business, marketing, and communications expertise in-house or on call to develop promising ideas and shepherd them into the marketplace. Yet another reason is communication itself: involved transfer processes, such as those in pharmaceuticals, require keen attention to communication between technical teams, between teams and regulators, between different organizations, and more. Focus is another issue that could be improved. Tech transfer doesn't need to concentrate only on patentable technology; with adequate investment and staffing, it can and should also focus more broadly on science.