National Big data Research and development initiative

white house big data research and development initiative and big data research and development initiative pdf
HelenaColins Profile Pic
HelenaColins,New Zealand,Professional
Published Date:06-07-2017
Your Website URL(Optional)
Comment
THE FEDERAL BIG DATA RESEARCH AND DEVELOPMENT STRATEGIC PLAN THE NETWORKING AND INFORMATION TECHNOLOGY RESEARCH AND DEVELOPMENT PROGRAM April 2016 MAY 2016 FEDERAL BIG DATA RESEARCH AND DEVELOPMENT STRATEGIC PLAN Executive Summary 1 A national Big Data innovation ecosystem is essential to enabling knowledge discovery from and confident action informed by the vast resource of new and diverse datasets that are rapidly becoming available in nearly every aspect of life. Big Data has the potential to radically improve the lives of all Americans. It is now possible to combine disparate, dynamic, and distributed datasets and enable everything from predicting the future behavior of complex systems to precise medical treatments, smart energy usage, and focused educational curricula. Government agency research and public-private partnerships, together with the education and training of future data scientists, will enable applications that directly benefit society and the economy of the Nation. To derive the greatest benefits from the many, rich sources of Big Data, the Administration announced a 2 “Big Data Research and Development Initiative” on March 29, 2012. Dr. John P. Holdren, Assistant to the President for Science and Technology and Director of the Office of Science and Technology Policy, stated that the initiative “promises to transform our ability to use Big Data for scientific discovery, environmental and biomedical research, education, and national security.” The Federal Big Data Research and Development Strategic Plan (Plan) builds upon the promise and excitement of the myriad applications enabled by Big Data with the objective of guiding Federal agencies as they develop and expand their individual mission-driven programs and investments related to Big Data. The Plan is based on inputs from a series of Federal agency and public activities, and a shared vision: We envision a Big Data innovation ecosystem in which the ability to analyze, extract information from, and make decisions and discoveries based upon large, diverse, and real- time datasets enables new capabilities for Federal agencies and the Nation at large; accelerates the process of scientific discovery and innovation; leads to new fields of research and new areas of inquiry that would otherwise be impossible; educates the next generation 3 of 21st century scientists and engineers; and promotes new economic growth. The Plan is built around seven strategies that represent key areas of importance for Big Data research and development (R&D). Priorities listed within each strategy highlight the intended outcomes that can be addressed by the missions and research funding of NITRD agencies. These include advancing human understanding in all branches of science, medicine, and security; ensuring the Nation’s continued leadership in research and development; and enhancing the Nation’s ability to address pressing societal and environmental issues facing the Nation and the world through research and development. Strategy 1: Create next-generation capabilities by leveraging emerging Big Data foundations, techniques, and technologies. Continued, increasing investments in the next generation of large-scale data collection, management, and analysis will allow agencies to adapt to and manage the ever- increasing scales of data being generated, and leverage the data to create fundamentally new services and capabilities. 
Advances in computing and data analytics will provide new abstractions to deal with complex data, and simplify programming of scalable and parallel systems while achieving maximal performance. Fundamental advances in computer science, machine learning, and statistics will enable future data-analytics systems that are flexible, responsive, and predictive. Innovations in deep learning will be needed to create knowledge bases of interconnected information from unstructured data. Research into social computing such as crowdsourcing, citizen science, and collective distributed tasks will help develop techniques to enable humans to mediate tasks that may be beyond the scope of 1 FEDERAL BIG DATA RESEARCH AND DEVELOPMENT STRATEGIC PLAN computers. New techniques and methods for interacting with and visualizing data will enhance the “human-data” interface. Strategy 2: Support R&D to explore and understand trustworthiness of data and resulting knowledge, to make better decisions, enable breakthrough discoveries, and take confident action. To ensure the trustworthiness of information and knowledge derived from Big Data, appropriate methods and quantification approaches are needed to capture uncertainty in data as well as to ensure reproducibility and replicability of results. This is especially important when data is repurposed for a use different than the one for which the data was originally collected, and when data is integrated from multiple, heterogeneous sources of different quality. Techniques and tools are needed to promote transparency in data-driven decision making, including tools that provide detailed audits of the decision-making process to show, for example, the steps that led to a specific action. Research is needed on metadata frameworks to support trustworthiness of data, including recording the context and semantics of the data, which may evolve over time. Interpreting the results from analyses to decide upon appropriate courses of action may require human involvement. Interdisciplinary research is needed in the use of machine learning in data-driven decision making and discovery systems to examine how data can be used to best support and enhance human judgment. Strategy 3: Build and enhance research cyberinfrastructure that enables Big Data innovation in support of agency missions. Investment in advanced research cyberinfrastructure is essential in order to keep pace with the growth in data, stay globally competitive in cutting-edge scientific research, and fulfill agency missions. A coordinated national strategy is needed to identify the needs and requirements for secure, advanced cyberinfrastructure to support handling and analyzing the vast amounts of data, including large numbers of real-time data streams from the Internet of Things (IoT), available for applications in commerce, science, defense, and other areas with Federal agency involvement—all while preserving and protecting individual privacy. Shared benchmarks, standards, and metrics will be essential for a well-functioning cyberinfrastructure ecosystem. Participatory design is necessary to optimize the usefulness and minimize the consequences of the infrastructure for all stakeholders. Education and training to build human capacity is also critical: users must be properly educated and trained to fully utilize the tools available to them. Strategy 4: Increase the value of data through policies that promote sharing and management of data. 
More data must be made available and accessible on a sustained basis to maximize value and impact. The scale and heterogeneity of Big Data present significant challenges in data sharing. Encouraging data sharing, including sharing of source data, interfaces, metadata, and standards, and encouraging interoperability of associated infrastructure, improves the accessibility and value of existing data, and enhances the ability to perform new analyses on combined datasets. Building upon the current state of best practices and standards for data sharing, as well as developing new technologies to improve discoverability, usability, and transferability for data sharing, will enable more effective use of resources for future development. Research is necessary at the “human-data” interface to support the development of flexible, efficient, and usable data interfaces to fit the specific needs of different user groups. Federal agencies that provide R&D funding can assist through policies to incentivize the Big Data and data science research communities to provide comprehensive documentation on their analysis workflows and related data, driven by metadata standards and annotation systems. Such efforts will encourage greater data reuse and provide a greater return on research investments. Strategy 5: Understand Big Data collection, sharing, and use with regard to privacy, security, and ethics. Privacy, security, and ethical concerns are key considerations in the Big Data innovation ecosystem. Privacy concerns affect how information is viewed and managed by data collectors and data providers; security concerns about personal information demand attention to data protection; and 2 FEDERAL BIG DATA RESEARCH AND DEVELOPMENT STRATEGIC PLAN ethical concerns about the possibilities of data analyses leading to discriminatory practices have reignited civil rights debates. Research in Big Data is necessary to understand and address the variety of needs and demands of different application domains to achieve practical solutions to challenges in data privacy, security, and ethics. New policy solutions may be necessary to protect privacy and clarify data ownership. Techniques and tools are needed to help assess data security, and to secure data, in the highly distributed networks that are becoming increasingly common in Big Data application scenarios. The ability to perform comprehensive evaluations of data lifecycles is necessary to determine the long- term risk of retaining, or removing datasets. Additionally, the Nation must promote ethics in Big Data by ensuring that technologies do not propagate errors or disadvantage certain groups, either explicitly or implicitly. Efforts to explore ethics-sensitive Big Data research would enable stakeholders to better consider values and societal ethics of Big Data innovation alongside utility, risk, and cost. Strategy 6: Improve the national landscape for Big Data education and training to fulfill increasing demand for both deep analytical talent and analytical capacity for the broader workforce. A comprehensive education strategy is essential to meet increasing workforce demands in Big Data and ensure that the United States remains economically competitive. Efforts are needed to determine the core educational requirements of data scientists, and investments are needed to support the next generation of data scientists and increase the number of data-science faculty and researchers. 
As scientific research becomes richer in data, domain scientists need access to opportunities to further their data-science skills, including projects that foster collaborations with data scientists, data-science short courses, and initiatives to supplement training through seed grants, professional-development stipends, and fellowships. In addition, employees and managers in all sectors need access to training “boot camps,” professional-development workshops, and certificate programs to learn the relevance of Big Data to their organizations. More university courses on foundational topics and other short-term training modules are also necessary to help transform the broader workforce into data-enabled citizens. Data-science training should extend to all people through online courses, citizen-science projects, and K- 12 education. Research in data-science education should explore the notion of data literacy, curricular models for providing data literacy, and the data-science skills to be taught at various grade levels. Strategy 7: Create and enhance connections in the national Big Data innovation ecosystem. Persistent mechanisms should be established to increase the ability of agencies to partner in Big Data R&D both by removing the bureaucratic hurdles for technology and data sharing and by building sustainable programs. One such possible mechanism is the creation of cross-agency development sandboxes or testbeds to help agencies collaborate on new technologies and convert R&D output into innovative and useful capabilities. Another is the development of policies to allow for rapid and dynamic sharing of data across agency boundaries in response to urgent priorities, such as national disasters. A third is the formation of Big Data “benchmarking centers” that focus on grand challenge applications and help determine the datasets, analysis tools, and interoperability requirements necessary in achieving key national priority goals. And, finally, a national Big Data innovation ecosystem needs a strong community of practitioners across Federal agencies to facilitate rapid innovation, ensure long-term propagation of ideas, and provide maximal return on research investments. 3 FEDERAL BIG DATA RESEARCH AND DEVELOPMENT STRATEGIC PLAN Introduction The Federal Big Data Research and Development Strategic Plan (Plan) defines a set of interrelated strategies for Federal agencies that conduct or sponsor R&D in data sciences, data-intensive applications, and large-scale data management and analysis. These strategies support a national Big Data innovation ecosystem in which the ability to analyze, extract information from, and make decisions and discoveries based on large, diverse, and real-time datasets enables new capabilities for both Federal agencies and the Nation at large; accelerates the process of scientific discovery and innovation; leads to new fields of research and new areas of inquiry that would otherwise be impossible; educates the next generation of 21st century scientists and engineers; and promotes new economic growth. In March 2012, the Obama Administration announced the Big Data Research and Development 4 Initiative to leverage the fast-growing volumes of digital data to help solve some of the Nation’s most pressing challenges. The Initiative calls for increasing government support and R&D investment to accelerate the Federal agencies’ ability to draw insights from large and complex collections of digital data. 
To augment Federal agency activities, the Administration reached out to other Big Data stakeholders in private industry, academia, state and local governments, and nonprofits and foundations to collaborate on new Big Data innovation projects. In November 2013, dozens of public 5 and private organizations gathered at an event, “Data to Knowledge to Action,” sponsored by the White House’s OSTP and the NITRD Program. Together, public and private partners announced an inspiring array of new projects that address such national priorities as economic development, healthcare, energy sustainability, public safety, and national security. In 2014, the NITRD Big Data Senior Steering Group (SSG) initiated a process to summarize findings and produce a coordinated Big Data R&D agenda. Through a series of internal workshops, NITRD agency representatives examined a range of game-changing ideas with the potential to drive Big Data innovations. The Big Data SSG then synthesized the body of ideas and information into a cross-agency framework. Public comment on this framework was solicited in a Request for Information and a workshop was convened at Georgetown University to engage non-government Big Data experts and stakeholders. This document is the result of these efforts. A primary objective of this document is to outline the key Big Data R&D strategies necessary to keep the Nation competitive in data science and innovation and to prepare for the data-intensive challenges of tomorrow. As a strategic plan, this document provides guidance for Federal agencies and policymakers in determining how to direct limited resources into activities that have the greatest potential to generate the greatest impact. The Plan profiles R&D areas that span multiple disciplines, surfacing intersections of common interest that could stimulate collaboration among researchers and technical experts in government, private industry, and academia. The Plan also offers ideas for decision makers to consider when deliberating about investments in Big Data in their respective domains. Additionally, this Plan is the Big Data SSG’s response to Recommendation 11c of the 2015 review of NITRD by the 6 President’s Council of Advisors on Science and Technology (PCAST) to “coordinate a process to publish and publicly discuss periodically a research and coordination plan for its area of interest.” The Plan is built around the following seven strategies that represent key areas of importance for Big Data research and development (R&D):  Strategy 1: Create next-generation capabilities by leveraging emerging Big Data foundations, techniques, and technologies. 4 FEDERAL BIG DATA RESEARCH AND DEVELOPMENT STRATEGIC PLAN  Strategy 2: Support R&D to explore and understand trustworthiness of data and resulting knowledge, to make better decisions, enable breakthrough discoveries, and take confident action.  Strategy 3: Build and enhance research cyberinfrastructure that enables Big Data innovation in support of agency missions.  Strategy 4: Increase the value of data through policies that promote sharing and management of data.  Strategy 5: Understand Big Data collection, sharing, and use with regard to privacy, security, and ethics.  Strategy 6: Improve the national landscape for Big Data education and training to fulfill increasing demand for both deep analytical talent and analytical capacity for the broader workforce.  Strategy 7: Create and enhance connections in the national Big Data innovation ecosystem. 
5 FEDERAL BIG DATA RESEARCH AND DEVELOPMENT STRATEGIC PLAN Strategies Strategy 1: Create next-generation capabilities by leveraging emerging Big Data foundations, techniques, and technologies As Big Data technologies mature, society will increasingly rely on data-driven science to lead to new discoveries and data-driven decision making as the basis of confident action. To address new challenges in Big Data, there should be continuous and increasing investments in research on technologies for large-scale data collection, management, analysis, and the conversion of data-to-knowledge-to-action; and on the privacy, security, and ethical issues of Big Data. In the past, Federal investments in foundational research in computer science—encompassing topics ranging from computer architecture and networking technologies to algorithms, data management, artificial intelligence, machine learning, and development and deployment of advanced cyberinfrastructure—have served as major drivers of the Nation’s successes in scientific discovery, Internet commerce, and national security. R&D investments by NITRD agencies resulted in the creation of the Internet that in turn enables today’s generation of Big 7 8 Data. NITRD agency-funded research in algorithms, such as PageRank and FastBit, resulted in the creation of robust indexing and search engine capabilities. Most recently, the discovery of the Higgs boson was enabled by the development of algorithms to identify complex signals from petabytes of 9 data. Scale Up to Keep Pace with the Size, Speed, and Complexity of Data Big Data encompasses a range of data scenarios—from large and rapid data streams to highly distributed and heterogeneous data-collection networks. Big Data contexts may require high- performance and complex processing of data, and very large warehouses and archives for data storage. 10 As highlighted by the National Strategic Computing Initiative, there is a need to scale up computing systems to deal with the sizes, rates, and extreme syntactic (format) as well as semantic (meaning) heterogeneity of such data. Further, for human users, the overall system must provide highly interactive, easy-to-use interfaces to allow human users to be “in the loop” to control the system, as well as to use the information and knowledge products generated by it. Many NITRD agencies are tasked with the development and maintenance of major scientific experiments, observations, and simulations that generate unprecedented volumes of data. Scientists increasingly want to integrate these datasets to facilitate discovery. Dedicated networks are needed to transport large volumes of scientific data generated at experimental facilities (such as the Large Hadron Collider at CERN or the new Linac Coherent Light Source at SLAC National Accelerator Laboratory) to distant and, in some cases, distributed computing resources for analysis. Data-management bottlenecks can occur at almost every stage of the scientific workflow including capturing data from an experimental or computing facility, transporting it for further analysis, and analyzing and visualizing the data, as well as finding appropriate environments for sharing data. A range of computer system architectures are required to serve the wide range of applications requirements—from tightly interconnected systems to more loosely coupled, distributed systems. 
Large system configurations, high-speed network interconnects, deep memory hierarchies, and high performance storage systems will be required in order to process large-scale and high-speed data interactively. These systems must be resilient and autonomic to deal with hardware and software faults and failures. New abstractions will be necessary to simplify the challenges of programming such systems 6 FEDERAL BIG DATA RESEARCH AND DEVELOPMENT STRATEGIC PLAN and exploiting parallelism for scheduling computation, communication, and output for interactive as well as batch-oriented Big Data applications. The proper set of abstractions must be provided to enable applications to specify their resource requirements and execute efficiently in an environment with shared resources. Naive scaling of current tools and techniques will not be sufficient as Big Data applications confront and supersede hard limits in, for example, input/output data rates from computing systems, or the amount of data that a human can perceive or understand, even using visualization. Biases may need to be introduced for the sake of tractability, requiring fundamentally new techniques and understanding. Along with convergence in architectural approaches, there are opportunities for coordination and collaborations between computational science (the Third Paradigm) and data science (the Fourth Paradigm). In many scenarios, complex computational models are validated via evaluation-driven research programs where Big Data is collected and analytics are measured in experimental settings. Conversely, many Big Data problems lead, eventually, to the creation and execution of computational models. Many techniques, tools, and approaches can be shared between both communities, especially if investments in measurement science yield metrics and evaluation frameworks that are generalizable across the many challenges that exist between computational and data science. What emerges is a fundamentally new workflow for scientific discovery where simulation and experimental data are 11 inextricably linked. Advances in computation and data analysis need to be coordinated. Big Data application scenarios are typically characterized by large-scale system configurations—for example, within a datacenter, across widely distributed datacenters, or across the IoT. With rapid changes in hardware and software technologies, and evolving applications needs and requirements, the notion of Software Defined Environments (SDE) or Software Defined Infrastructure (SDI) becomes important within the context of such large configurations. Different types of applications, or different phases within a given application, may require different configurations among various system components. In many cases, Big Data applications may execute within a cloud environment where the cloud provider provides a generic system that can be customized for a particular computation using SDE. In other scenarios, when the cost of moving data is too high and latencies become a major roadblock, future infrastructures may help move the computation to the data. National platforms, such as the 12 Global Environment for Network Innovations (GENI) must continue to provide the large-scale experimental testbeds to carry out such research. In addition, public-private, national-state level 13 partnerships, such as US Ignite, are necessary to foster novel applications and digital experiences. 
Big Data applications must deal with data from multiple sources that may be heterogeneous in a variety of ways, such as the syntax and semantics of the data, the quality of the data, and the policy regime under which the data was produced and by which it can be used. Core technical infrastructure to enable representation of semantic information is a key next-generation capability for Big Data. A core capability for enabling such applications is a semantic information infrastructure to enable easier discovery of relevant data and integration across related datasets. Scalable approaches are needed to tackle the full scope of this problem. Techniques employed can range from automated machine-learning algorithms to human-in-the-loop approaches, including crowdsourcing methods. One of the critical technology components is named entity identification, to assist in transforming unstructured data to structured data. New directions for research are opening up in this area as well as in data quality, which is being explored by a number of agencies. 7 FEDERAL BIG DATA RESEARCH AND DEVELOPMENT STRATEGIC PLAN DHS Homeland Security Advanced Research Projects Agency (HSARPA) CONNECTED, PROTECTED, AND FULLY AWARE Thick smoke, scorching heat, and blaring alarm bells fill the building. Over 60 pounds of protective gear and countless hours of training help support our Nation’s firefighters, but timely and accurate situational awareness or “scene size-up” remains a challenge on every call. The Department of Homeland Security (DHS) operates the National Fire Incident Report System (NFIRS) to gather and analyze information on the Nation's fires. Despite today’s education, research, and training efforts, fires kill over 3,000 people and injure over 17,000 people nationwide each year. Annual property loss due to fire approaches 12 Image courtesy of the Homeland Security Advanced billion. Most of these fatalities, injuries, and losses Research Projects Agency. are preventable. In partnership with the Federal Emergency Management Agency (FEMA) and the US Fire Administration (USFA), HSARPA developed an analytical prototype and worked with four regional fire departments to explore 225 million NFIRS incidents at the national, state, and regional levels. Big Data technologies such as geospatial and graph analytics were used to identify trends and patterns about incident types, equipment failures, and firefighter casualties, delivering new insights on how to improve training and reduce losses. The IoT is a rapidly emerging source of Big Data. Estimates are that by 2018 over half of Internet traffic 14 will originate not from computers but from devices. The devices in this category include sensors of all types, mobile phones, and other consumer and industrial electronic devices. Increasingly, instrumented systems or environments are becoming the norm in science and engineering scenarios. Federal agency- funded scientific research is pioneering new approaches, such as the National Ecological Observatory 15 Network (NEON), that are deploying large numbers of heterogeneous sensors and sensor networks that will collect large amounts of heterogeneous data. New tools are needed to unify and organize this information into human- and machine-readable summaries in a timely fashion. Many of these environments are characterized as cyber-physical systems because they seamlessly integrate computational algorithms and physical components. 
New technologies for handling the breadth and scope of data from IoT will be essential for many future Big Data applications. Develop New Methods to Enable Future Big Data Capabilities Cutting-edge data management, querying, and analysis techniques in computer science must be linked with fundamental approaches in statistics and machine learning to create data systems that are flexible, responsive, and predictive. Computer-science techniques need to incorporate more statistical approaches, while statistical techniques need to develop approaches for trading off statistical power and computational complexity. This merging of computer science and statistical techniques will usher in a “Smart Data” era with enormous opportunities for new applications. However, the scalability of statistical methods also poses a major challenge. When data becomes big, the possible number of 16 simultaneous hypotheses, as well as data points, can be on the order of millions. Robust statistical algorithms may not run within an acceptable time frame, forcing users to rely on less sophisticated and more error-prone algorithms. Integration of statistical inference principles as part of Big Data will be essential to resolve these challenges. 8 FEDERAL BIG DATA RESEARCH AND DEVELOPMENT STRATEGIC PLAN A suite of tools and best practices is also needed for real-time statistical inference using data streams. These tools and best practices should process data-quality information, be scalable, and be able to support a broad range of application areas, such as security, bioinformatics, consumer behavior, climate, civic infrastructure, and demographics. A large class of new Big Data applications is required to deal with diverse sources and forms of data, ranging from highly structured to unstructured data. Data-driven model development is a key approach to extracting structure and meaning from Big Data. Machine-learning techniques are essential to this endeavor. Research is needed in deep learning methods that can add identification and predictive power to data and algorithms. The structure of the human brain, based on studies from the BRAIN 17 Initiative, may itself provide new insights and inspiration for a new generation of neural network algorithms and computing architectures, and lead to research in areas such as neuromorphic computing. DOD Defense Advanced Research Projects Agency (DARPA) MAKING SENSE OUT OF THE COMPLEX Some of the systems that matter most to us are very complicated. Ecosystems, the brain, social systems, and the economy have many parts and processes. These processes, however, are often studied piecewise, and the literature and data on them can be fragmented, distributed, and inconsistent. Although the collection of Big Data is increasingly automated, the creation of big mechanisms (that is, the full explanations of complicated systems), remains a human endeavor. Illustration courtesy of the Defense Advanced Research Projects Agency. DARPA’s Big Mechanism program aims to speed up the scientific research process by enabling machines to read, synthesize, and reason about complicated systems. The goal of the program is to develop technologies for a new kind of science in which research is integrated immediately into causal, explanatory models of unprecedented completeness and consistency. The first challenge taken on by Big Mechanism researchers is cancer biology. 
The program is working on machine reading of scientific papers to identify molecular interactions in signaling pathways, modeling languages to integrate fragments of knowledge into large models, and algorithms to identify possible drug targets that show promise. Tools developed by the Big Mechanism program may enable a new kind of scholarship, in which scientists model and understand entire systems, not just system components. While automated techniques can greatly improve productivity, humans continue to perform some tasks better, such as in data identification, curation, and categorization. Systems for human-aided computation can range from the computational tools that put scientists and experts “in the loop” to expansive platforms for crowdsourcing and citizen science. These “social computing” systems employ various methods of engagement, including social media, peer production, crowdsourcing, and collective 18 19 distributed tasks. In many citizen-science projects, such as Galaxy Zoo and Foldit, and collaboration experiments like the DARPA Network Challenge, human volunteers provided insights and discoveries 9 FEDERAL BIG DATA RESEARCH AND DEVELOPMENT STRATEGIC PLAN that expert analysis missed. The 2013 PCAST review of NITRD identified social computing as an area ripe for further attention and investment, particularly noting the potential of mobilizing citizens to address 20 national priorities in health, public safety, and science. Incorporating human workers (volunteer or paid) into data-processing workflows could augment analysis capabilities and benefit many applications. DOE Advanced Scientific Computing Research program (ASCR) INTERACTIVE EXPLORATION OF ENORMOUS DATA “A picture is worth a thousand words,” but how do you picture a billion data values computed by a simulation? The idea is daunting. Nevertheless, the In Situ Big Data Visualization of Scientific Climate Data project, funded by DOE ASCR, aims to understand the impact of climate change on the Nation’s power infrastructure (e.g., power plants). To achieve this ambitious goal, scientists need the next generation of climate simulations and exascale supercomputers. Image courtesy of Los Alamos National Laboratory. Current practices for analyzing simulation data rely on moving the data from the supercomputer that runs simulations to another computing environment for manipulation and analysis. In the era of exascale computing, such practices will not be feasible because of the vastly larger datasets involved and the very limited bandwidth and storage for moving and saving data. This project will use “in situ” analysis, which will allow scientists to process the data before it is moved, thereby transporting and storing only the data that provides new insights and discoveries. The In Situ Big Data Visualization of Scientific Climate Data project brings together an in situ workflow for data reduction and a visualization tool called ParaView. The combination will allow researchers to interactively explore simulations and extract meaningful information from datasets that would otherwise be inscrutable. These new tools and techniques will help scientists visualize, analyze, and understand the potential impacts of climate on our energy production and power infrastructure over long periods of time and for specific regions. In addition, the tools and techniques are transferable to other science disciplines that face similar challenges with their data. 
Given the complexity of Big Data, there are key roles for metadata, uncertainty information, and quality visualizations to play in understanding its significance. Further research and innovation is needed to detect both the expected and unexpected; provide timely, defensible, and understandable assessments; and communicate those insights effectively. This will require multi-user, multi-stakeholder engagements that are equipped with the necessary collaborative environments and tools. Many Big Data analyses begin with discovering correlations among factors, which can provide important insights into underlying phenomena. Research and advances in measurement science are needed in hypothesis generation, causal inference, and other fundamental statistical methods using Big Data. Progress in these areas will allow researchers to obtain better insights and recognize spurious correlations, which might lead to incorrect conclusions. Robust techniques are also needed for representing and processing data quality information. 10 FEDERAL BIG DATA RESEARCH AND DEVELOPMENT STRATEGIC PLAN Strategy 2: Support R&D to explore and understand trustworthiness of data and resulting knowledge, to make better decisions, enable breakthrough discoveries, and take confident action Traditional statistical approaches are used to handle “designed” datasets derived from controlled experiments or surveys. Big Data is often comprised of such designed data, but also may include data collected “opportunistically,” or collected for one purpose and reused for another. The data may have been processed through a series of incremental analyses, each for a different purpose. These characteristics make it challenging to provide a holistic view of the data and the uncertainty in the underlying information. This major challenge in today’s Big Data landscape is due, in part, from the rise in the sharing and availability of data facilitated by the Internet and promoted by several U.S. Government policies. There is also a growing recognition among data scientists that access to relevant data is essential for building on previous results. The consequence is that data users are increasingly removed from the generators or collectors of the data they use. As the expectation for data access grows, data users will need to understand and assess how data can be used and whether the source is trustworthy. However, deriving accurate knowledge from data puts burdens on both the data disseminator and the data user. Data that is appropriately documented, formatted, and accompanied by complete and meaningful metadata facilitates confident use. Robust measures are needed to quantify uncertainty and capture context to ensure reproducibility of results; this will give decision makers the ability to validate the trustworthiness of the data and the products of analyses. Decision makers will require tools for parsing the relevant knowledge applicable to decisions, converting knowledge into possible action, and understanding the implications and impact of those actions. Understand the Trustworthiness of Data and Validity of Knowledge In the Information Quality Act of 2001, the White House Office of Management and Budget (OMB) 21 recognized the importance of the quality of data disseminated by Federal agencies. In general, determining overall “trustworthiness” of data is a challenging task—the definition of the term may vary depending upon the application scenario and use-case. 
In one framework employed in the social 22 sciences, trustworthiness is delineated into the truthfulness (i.e., internal validity or credibility), applicability (i.e., external validity or transferability), consistency (i.e., reliability or dependability), and neutrality (i.e., objectivity or confirmability) of the information. Inferences based on data would need to account for which of the characteristics are satisfied by the data. The scale, heterogeneity, and rapidly changing nature of Big Data further complicate the notions of trustworthiness and inference. Traditional tests for validity, soundness, and significance may need to be modified and adapted. Quantities such as accuracy, error, precision, anomalies, authenticity, and uncertainty may need to be measured and tracked to enable robust data-driven decision making. Software may be required to track the propagation of these quantities through multi-step and iterative processing pipelines, complex transformations, and integration across heterogeneous data sources. Understanding data trustworthiness is essential in order to derive accurate inferences from Big Data. Research is needed to develop robust statistical techniques that use a wide diversity of data inputs. Interdisciplinary research is also needed to create state-of-the-art techniques that combine heuristics (trial and error) and statistics. A key differentiator of Big Data is the ex post facto discovery of uses for previously collected data. Another is the combining of independently collected datasets, each of which 11 FEDERAL BIG DATA RESEARCH AND DEVELOPMENT STRATEGIC PLAN may fulfill different assumptions. Interpreting the output quality of statistical tests on this type of data depends upon the specific type of statistical test being performed, i.e., the nature of the question being asked, which may vary by application. Traditional statistical tests may be insufficient. New, innovative techniques may be needed for Big Data. In data science, replicability is the ability to rerun the exact data experiment—with the same data inputs, parameter settings, and computations—to produce exactly the same result. Reproducibility is the ability to use different data, techniques, and/or equipment to confirm the same result as previously obtained. Both are fundamental to the validations of results and conclusions drawn from data. NITRD agencies are interested in replicability and reproducibility for science R&D applications as well as for decision-making applications. However, both are challenging notions to implement in a Big Data computing environment, where datasets can be extremely large and constantly evolving. Both a common framework and a common understanding of these concepts are needed to help improve the trustworthiness of data and computed results. NIST Information Technology Laboratory (ITL) and Materials Measurement Laboratory (MML) BATTLING VISION LOSS IN AMERICA Slowly going blind is the challenge faced by over 7 million Americans with age-related macular degeneration (AMD). AMD is the leading cause of blindness in adults, but effective treatments, such as cell-based therapies, could offer a way to replace damaged eye tissue with healthy tissue and save the Nation 30 billion annually in lost GDP. However, to ensure the effectiveness of cell-based therapies, decisions on whether to implant tissues into the patient must be made based on trustworthy tissue images. 
Big Image Data, a joint ITL and MML project, addresses the need for high quality imaging measurements with a goal of achieving 10 times the quality of current medical image interpretation. This improvement will enable individuals with AMD to be diagnosed and to receive cell-based therapies with lower risks of adverse events. At the National Institute of Standards and Technology (NIST), the long-term goal is to provide validated methods for automated manufacturing and product release tests that will speed and improve the decision-making process. For example, the technologies developed within NIST can be reused in other fields such as materials reliability to promote trustworthy measurements. Data processing is typically performed via analysis pipelines. Comprehensive tools and best practices are needed to ensure that existing analysis pipelines can persist into the future, and that the results can be replicated at some future point in time. The use of open-source software, data, and Application Programming Interfaces (APIs) can be key enablers in this regard. This requires contextual information, well-defined metadata frameworks, and the ability to instantiate a past processing environment in the future. Tools such as the Code, Data, and Environment (CDE) package are able to overcome technical barriers to reproducibility by converting all software dependencies to a code that can be executed on Linux computers, other than the original Linux system. Containers, virtual machines, and packaging systems like CDE are useful tools for enabling replicability in Big Data, but these systems will need to evolve as technologies and analysis capabilities progress. Researchers can be incentivized to adopt good practices by requiring reproducible research strategies as part of their research activity. This ensures the reliability of their analyses and allows other researchers 12 FEDERAL BIG DATA RESEARCH AND DEVELOPMENT STRATEGIC PLAN to derive further value from past datasets and analyses. For example, the NIH’s “Principles and 23 Guidelines for Reporting Preclinical Research” is a prime example of guidance that agencies can provide to improve reproducibility within a specific domain. In addition to creating better metadata standards for future data collection, some Federal agencies must also maintain legacy scientific data whose use is limited because it lacks metadata. Agencies such as NASA are investing in technology for “data archeology” that strives to automate the generation of metadata from data content. Learning and using proper methods and protocols for data collection, analysis, and interpretation are an essential starting point for data trustworthiness. This requires a well-educated workforce that can stay current through on-going training programs as techniques and tools evolve. This is particularly important in the Big Data world, where data and information may have diverse origins and unpredictable use. Education and training in the authoring and use of metadata and, as mentioned in the next section, adoption of strong metadata standards, will be essential. NIH Center for Expanded Data Annotation and Retrieval (CEDAR) ESTABLISHING BETTER DATA FOR BETTER SCIENCE Imagine a library with an incomplete catalog that made it impossible to know where a resource might be located, what language it is written in, or whether it is a video or a document. 
Similar to a library’s indexing methods, metadata is needed to index large numbers of experimental datasets so that they can be retrieved, reused, and properly attributed. The challenge is that creating accurate and adequate Image courtesy of the Human Immunology Project Consortium. metadata is a tedious process. The Center for Expanded Data Annotation and Retrieval (CEDAR) at Stanford University, funded through the NIH, wants to help investigators in the sciences achieve the promise of Big Data by making the process of metadata creation as painless as possible. CEDAR’s goal is to create a unified framework that all scientific disciplines can use to create consistent, easily searchable metadata that allows researchers to locate the datasets that they need, consolidate datasets in one location, integrate multiple datasets, and reproduce the results. For example, CEDAR projects like the Human Immunology Project Consortium (HIPC) will enable widespread, free sharing of immunology data. This knowledge base will serve as a foundation for the future study of a variety of inflammatory diseases as well as immune-mediated diseases, such as allergy, asthma, transplant rejection, and autoimmune diseases. Significant efforts are needed to curate datasets—to record the context as well as semantics associated with the data, and with the analyses performed on the data. Effective and proper reuse of data demands that the data context be properly registered and that data semantics be extracted and represented. While automation will be essential to accomplish this for Big Data, tasks related to curation, context, and semantics will also require a human-in-the-loop approach. Tools and ecosystems are needed to assist in this task, such as entity identification that utilizes global persistent identifiers and the use of domain ontologies for knowledge representation. Research in metadata modeling, automated metadata generation and registration, semantic technologies, ontologies, linked data, data provenance, and data citation will be important. 13 FEDERAL BIG DATA RESEARCH AND DEVELOPMENT STRATEGIC PLAN Some metadata may change or evolve over time, depending on how the data is used, the presence of new datasets, etc. Ontologies may also evolve with the addition of new data and information. While well-defined metadata frameworks are essential for capturing this information, research will also need to take into account the evolution of such information over time. Tools may be needed to bridge data collections by retrofitting and migrating older metadata into schemas compatible with current and future collection efforts. Design Tools to Support Data-Driven Decision-Making From algorithmic stock trading to managing the smart electric grid, systems used today are increasingly automated. However, the vast majority of data-driven decision-making systems still require human intervention. In almost all complex decisions, humans must interpret the information generated from algorithms, determine the validity of the information in the given context, take policy considerations into account, and then decide upon an appropriate course of action. Research is needed on how technology can best augment human judgment in such data-driven decision-making scenarios, in order to better inform their choices as well as increase the speed at which trustworthy and confident decisions and actions can be taken. 
The next generation of data-driven decision-making systems must also be adaptive and have the ability to integrate analysis of real-time data flows with historical data. Human-mediated decision making will remain essential in these complex, multifaceted situations, where information may be derived from multiple heterogeneous sources. User-friendly interfaces to these complex decision support systems and environments will be needed. Another key requirement is to establish the provenance of data-driven decisions, with tools to make the decision-making process reproducible, traceable, and transparent. In order to build trust in data-driven systems, the tools must clearly and succinctly enumerate the steps by which conclusions were reached. Machine-learning approaches, including deep learning systems, are also needed to build better data- driven models that can be used to reliably augment human decision making. Artificial Intelligence (AI) research will have profound impacts on decision-making systems. Question and answer systems that automate answers to human-posed questions (e.g., IBM’s Watson), and systems that reason (e.g., DARPA’s Big Mechanism program) are examples of how AI can augment human capabilities. These systems are a resource for human decision makers, who can now leverage a much broader knowledge base as input while minimizing human biases in their output. Big Data SSG agencies recognize that research into the foundational areas of AI is critical to creating systems that make Big Data more actionable. Given the complexity of information that will be available in the future, decision-support systems must also be capable of functioning in a collaborative manner with multiple agents to satisfy multiple objectives. Humans are often asked to make decisions that satisfy a disparate and potentially divergent set of end goals. Intelligent agents must be capable of operating under similar circumstances. A decision-making system must be able to either reconcile these criteria within a recommended course of action or present clear options and consequences that allow a human to understand and decide, while also taking into account high-level policy considerations. 14 FEDERAL BIG DATA RESEARCH AND DEVELOPMENT STRATEGIC PLAN National Science Foundation (NSF) and the Treasury Office of Financial Research (OFR) SHINE A LIGHT INTO DARK CORNERS The 2007-2009 financial crisis made clear that our understanding of the financial system was deficient in many respects. Market participants and regulators underestimated how disruptions could emerge and spread quickly across interconnected companies and markets, with severe consequences for the economy. As a result, in 2010, Congress created the Office of Financial Research (OFR) to serve the needs of the Financial Stability Oversight Council and its member agencies. OFR’s mission is to shine a light in the dark corners of the financial system to see where risks are going, assess how much of a threat they might pose, and provide policymakers with the information, the policy tools, and the analysis to mitigate them. The OFR is a virtual research-and-data community that uses a collaborative approach to expand its capacity to meet urgent needs and complement the work of others in the financial sector. To accomplish its goals, the OFR has partnered with the National Science Foundation to support Big Data financial-stability research, policymaking, and decision-making. 
This program involves collaboration by computer scientists, statisticians, economists, social scientists, and financial experts in using Big Data tools and techniques to identify and assess risks to the financial stability of the United States. The research will help support a more transparent, efficient, and stable financial system. To ensure trustworthy, robust conclusions and decisions, it is necessary to fully understand the human as well as the technical components of a data-driven decision-making system. Research in augmenting human decision making with data-driven decisions is inherently socio-technical and interdisciplinary in nature. Understanding the nature of human behavior and cognitive biases when faced with different kinds of information is as critical as the validity of the information itself. Counterintuitively, presenting more data to a human can sometimes lead to a greater chance of erroneous conclusions. For example, studies have demonstrated that when presented with gender information, clinicians tend to under- 24 diagnose heart disease in women, as compared to men exhibiting the same symptoms. It is likely that an increasing number of decisions in the future will be mediated through Big Data knowledge processes. Decisions made by people in all walks of life will be informed by knowledge derived from Big Data. Clear communication of how that knowledge was created, and the set of robust conclusions that could be drawn from that knowledge, is valuable to experts and non-experts alike. Increased data literacy of Big Data technologies is critical to increasing the adoption of such tools and strengthening the entire “data-to-knowledge-to-action” process. 15 FEDERAL BIG DATA RESEARCH AND DEVELOPMENT STRATEGIC PLAN Strategy 3: Build and enhance research cyberinfrastructure that enables Big Data innovation in support of agency missions State-of-the-art cyberinfrastructure is necessary if Federal agencies are to take advantage of the opportunities that Big Data offers. Innovative advanced cyberinfrastructure will need to combine the 25 powers of Big Data and large-scale computing into a coherent capability for data analysis; address the 26 challenges of data transport at all scales, from on the chip to across the globe; and satisfy the growing 27 need for new environments for data sharing and analytics. Federal agencies have had a long history of supporting research on leading-edge infrastructure from the development of ARPANET in 1969, to the NSFNET and DOE ESnet programs, to the current NSF Global Environment for Network Innovations (GENI) program in computer networking and Extreme Digital (XD) program in high-performance computing. Advancements in research cyberinfrastructure must serve a wide range of Big Data application needs and requirements. 
At the high end of computing and data, the National Strategic Computing Initiative (NSCI) provides guidance for “accelerating delivery of a capable exascale computing system that integrates hardware and software capability to deliver approximately 100 times the performance of current 10 petaflop systems across a range of applications representing government needs,” with “increasing coherence between the technology base used for modeling and simulation and that used for data analytic computing.” The NSCI is a “whole-of-government effort designed to create a cohesive, multi-agency strategic vision and Federal investment strategy, executed in collaboration with industry 28 and academia, to maximize the benefits of high-performance computing (HPC) for the United States.” In other areas of Big Data, the applications may require very different types of cyberinfrastructure. Examples include highly networked systems, such as the IoT, that may not require exascale computing, but may pose challenges to operating in a highly distributed, parallel system on data stored across deep storage/memory hierarchies, or large memory systems for in-memory operations on extremely large data structure, such as complex graphs. In concert with the NSCI and other related initiatives and activities, a coordinated national strategy is needed to identify the needs and requirements for secure, advanced cyberinfrastructure to support handling and analyzing large amounts of data. Design considerations must include the full range of data scenarios, from large-scale warehoused historical data to data from multiple, concurrent, real-time data streams. Regardless of the type of application, state- of-the-art cyberinfrastructure is essential in a data-driven world, for maintaining global competitiveness in cutting-edge scientific research, promoting a vibrant data-driven industry sector, and fulfilling the public mission of government agencies. Strengthen the National Data Infrastructure Datasets themselves constitute essential infrastructure for Big Data. A key aspect of the Big Data strategy is enabling access to open data, supporting sustained access, and providing controlled access to protected data. In the 1970s, research infrastructures like ARPANET and NSFNET led to the creation of the Internet. In the 1980s, a concerted effort in high-performance computing led to the creation of supercomputer centers at multiple NITRD agencies and research institutions. Today, the need is to significantly enhance national data infrastructure to exploit the full power of Big Data. While there are community-based efforts, such as the Research Data Alliance (RDA) and the National Data System (NDS), that focus on issues related to creating a national (and international) capability, the Big Data SSG is 16 FEDERAL BIG DATA RESEARCH AND DEVELOPMENT STRATEGIC PLAN interested in a coordinated plan for a national data infrastructure that can serve the needs of a wide range of stakeholders. There is a need to standardize access to data resources within and across agencies. The collaborative development of standards and metrics for the entire data-driven cyberinfrastructure pipeline, including the hardware, analytics, data resources, and interfaces with which they interact, will be critical for a well-functioning cyberinfrastructure ecosystem. 
Several agencies have highlighted the importance of developing data and metadata standards in order to improve interoperability among data resources across organizations (e.g., the NIST ISO/IEC JTC 1 Study Group on Big Data29 and the USGS Modular Science Framework30). An open systems approach and a federated implementation would greatly enhance the ability to share and combine datasets within and among agencies and with the public.31 Data curation and data-element registration, the development of standards, and data sharing and data integration approaches should involve all relevant stakeholders. The approach should be federated, modular, and extensible in order to allow for agency-specific additions and enhancements. Community-based organizations such as the RDA are engaged in grassroots activities to further the development and adoption of such standards. New standards are also needed for measuring the performance and effectiveness, including the price/performance, of Big Data systems. These standards should reflect end-to-end performance for realistic application scenarios, necessarily combining data-intensive and compute-intensive aspects. Initiatives under way include the NSF’s Benchmarks of Realistic Scientific Application Performance of Large-Scale Computing Systems (BRAP) program32 and the community effort to develop the Big Data Top 100 List.33 Along with performance, the new metrics should incorporate price/performance and energy performance.

A multiplicity of solutions, such as shared repositories, federated and virtual approaches, and discoverability systems, is needed to share data across disciplines and among agencies. In any field, discovery is enabled by the availability of cyberinfrastructure oriented to Big Data. All Federal agencies with annual research budgets greater than $100 million have developed public access plans to increase access to the results of Federally funded scientific research, including data. DARPA supports the DARPA Open Catalog, which contains a curated list of DARPA-sponsored software and peer-reviewed publications. Many agencies support community data repositories across a wide range of science and engineering disciplines. In many fields, including those represented by the Precision Medicine Initiative and the Materials Genome Initiative, a transition is taking place from the generation of small, disparate datasets (the so-called “long-tail data”) to the bundling and integration of these data to enable easier discovery, access, and analysis. Programs such as the NIH BD2K and NSF’s Building Community and Capacity in Data-Intensive Research in Education (BCC-EHR)34 are vital for ensuring that all communities have access to new resources and analytical techniques to advance their fundamental research.

With advances in simulation methodologies and computing power, the validity of results from simulations could equal that of results from instrumentation. There will be increasing opportunities for integrating simulation data with observational and experimental data, accelerating progress in data-intensive computational research. The Big Data SSG recognizes the opportunity for a strategic effort in this area for systems and standards that enable easy sharing and use of research, government, and other open data.
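To make the idea of shared, federated data and metadata standards concrete, the sketch below shows one minimal form a dataset-description record might take. The field names (identifier, steward_agency, access_level, schema_url, checksum_sha256, and so on) and the validation rules are illustrative assumptions for this sketch, not an existing Federal, NIST, or RDA schema; an agency adopting a federated approach could extend such a core record with its own modular additions.

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical core fields for a federated dataset-description record.
# These names are illustrative only; they do not reflect an adopted
# Federal, NIST, or RDA metadata standard.
ACCESS_LEVELS = {"public", "restricted", "non-public"}

@dataclass
class DatasetRecord:
    identifier: str              # persistent ID, e.g., a DOI-like string
    title: str
    steward_agency: str          # agency responsible for the data
    access_level: str            # one of ACCESS_LEVELS
    schema_url: str              # where the data dictionary/schema is published
    checksum_sha256: str         # fixity information for integrity checks
    keywords: list = field(default_factory=list)
    agency_extensions: dict = field(default_factory=dict)  # modular, agency-specific additions

    def validate(self):
        """Check the core fields that every participating repository would agree on."""
        if not self.identifier or not self.title:
            raise ValueError("identifier and title are required")
        if self.access_level not in ACCESS_LEVELS:
            raise ValueError(f"access_level must be one of {sorted(ACCESS_LEVELS)}")
        if len(self.checksum_sha256) != 64:
            raise ValueError("checksum_sha256 must be a 64-character hex digest")
        return True

    def to_json(self):
        """Serialize to JSON so records can be exchanged across repositories."""
        return json.dumps(asdict(self), indent=2)

if __name__ == "__main__":
    record = DatasetRecord(
        identifier="doi:10.0000/example-dataset",                   # hypothetical
        title="Streamflow observations, 2010-2015",                 # hypothetical
        steward_agency="USGS",
        access_level="public",
        schema_url="https://example.gov/schemas/streamflow.json",   # hypothetical
        checksum_sha256="0" * 64,                                   # placeholder digest
        keywords=["hydrology", "open data"],
        agency_extensions={"usgs_site_code": "01646500"},           # example modular extension
    )
    record.validate()
    print(record.to_json())
```

The key design point the sketch illustrates is the split between a small, agreed-upon core and an open-ended extension field, which lets each agency add mission-specific detail without breaking cross-agency discovery and exchange.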
Empower Advanced Scientific Cyberinfrastructure for Big Data

Investments in shared, leadership-class high-performance computing (HPC) resources have been made by Big Data SSG agencies such as NSF and DOE, which have traditionally focused on modeling and simulation applications. Increasingly, however, agencies are also sponsoring high-end HPC systems that provide processing capabilities for data analytics, and are pushing the development of computer system architectures that can efficiently support high-performance computing for both memory- and compute-intensive codes as well as efficient data access and processing. Among the goals of the NSCI is to promote the development of exascale systems that are also capable of performing data analytics operations efficiently.

There is growing recognition that significant data issues are common to different disciplines and application areas, even while some issues are specific to a discipline or application area. Some aspects of cyberinfrastructure for Big Data may focus on specific application domains, while others are common and shared across multiple research domains. Investments in both categories are critical for supporting the diversity of Big Data innovation: the former so that domains with specific and difficult Big Data challenges can be well supported with resources optimized for those applications, and the latter so that a shared infrastructure can offer access to resources that an individual community alone would not be able to build and sustain.

DOE Office of Science (SC): SUPPORTING COLLABORATIVE SCIENCE AROUND THE GLOBE

Science is global. Facilities like the Large Hadron Collider, the Advanced Light Source, and the Joint Genome Institute create multi-terabyte to multi-petabyte scale datasets that need to be disseminated and analyzed by scientists and computing resources around the world. Enabling this experimentation and collaboration requires extremely fast networking speeds. The DOE Office of Science’s Energy Sciences Network (ESnet) seeks to ensure that scientific progress is unconstrained by the location of experimental instruments, people, or computational resources, or by the size of the data. ESnet can move data at 100 gigabits per second and is engineered to be an international resource for collaborative data-intensive science. (Image courtesy of the Department of Energy’s Office of Science.)

ESnet is helping researchers obtain real-time feedback on experiments using DOE Office of Science light sources. For example, a detector at a light source facility can now capture data from a sample, automatically send it to a supercomputing center to be processed and visualized, and then allow the scientist to access the data from a web portal in near real time. Using ESnet, groups of scientists can get feedback on whether they have taken the right sample or calibrated the experiment appropriately, thereby greatly reducing the time it takes to discover new phenomena.

There is a significant opportunity for coupling advanced cyberinfrastructure with physical data sources, whether a telescope, an MRI machine, or instrumentation at border crossings. Such a linked system could provide targeted, efficient, and effective data services and efficient pathways to appropriate computing resources.
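The near-real-time pattern described above (detector capture, automated transfer to a computing facility, and publication of results for the scientist) can be sketched as a simple pipeline. The sketch below is schematic only: the queue-based transfer stage, the process_chunk analysis step, and the in-memory "portal" result store are hypothetical stand-ins for facility-specific acquisition, wide-area transfer (such as flows carried over ESnet), and web-portal services, not part of any DOE software interface.

```python
import queue
import threading
import time

# Schematic sketch of an instrument-to-analysis pipeline: a detector produces
# data chunks, a transfer stage ships them to a computing facility, and results
# are "published" for near-real-time feedback. All components are hypothetical.

transfer_queue = queue.Queue()   # stands in for a wide-area data transfer service
results = []                     # stands in for a web portal's result store

def detector(num_chunks=5):
    """Simulate a light-source detector emitting data chunks during a run."""
    for i in range(num_chunks):
        chunk = {"chunk_id": i, "pixels": [i] * 4}  # toy payload
        transfer_queue.put(chunk)
        time.sleep(0.01)                            # acquisition cadence
    transfer_queue.put(None)                        # end-of-run marker

def process_chunk(chunk):
    """Toy analysis standing in for reconstruction/visualization at an HPC center."""
    pixels = chunk["pixels"]
    return {"chunk_id": chunk["chunk_id"], "mean_intensity": sum(pixels) / len(pixels)}

def analysis_worker():
    """Consume transferred chunks, analyze them, and publish results."""
    while True:
        chunk = transfer_queue.get()
        if chunk is None:
            break
        results.append(process_chunk(chunk))  # "publish" to the portal store

if __name__ == "__main__":
    worker = threading.Thread(target=analysis_worker)
    worker.start()
    detector()
    worker.join()
    for r in results:
        print(f"chunk {r['chunk_id']}: mean intensity {r['mean_intensity']:.1f}")
```

The point of the sketch is the feedback loop: because analysis begins while the run is still in progress, scientists can learn within minutes whether a sample or calibration needs to be changed, rather than after the beam time has ended.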
When developing this type of cyberinfrastructure, it is important to have a comprehensive understanding of the instrumentation that generates the data, to understand the processing and computational objectives, and to collaborate with the data collection groups and the end users of the data.

Address Community Needs with Flexible and Diverse Infrastructure Resources

The process of designing and developing cyberinfrastructure benefits greatly when stakeholders and the intended user base are included. Community involvement is essential when a project requires the design of hardware that is specific to handling a particular type of data or dataset. Early stakeholder involvement optimizes the usefulness of the resulting infrastructure and minimizes unwanted consequences.

Robust cyberinfrastructure for Big Data will require real-world deployment of systems using state-of-the-art as well as emerging Big Data technologies. However, enterprise-level systems are costly and require consistent management and maintenance that may be beyond the capacity of universities or mid-sized enterprises to support for testing the performance of Big Data techniques. To allow researchers to test the deployment of infrastructure that meets the current and anticipated needs of Big Data communities, there is a need to invest in cyberinfrastructure pilot programs and testbeds for specific research communities and applications that use and analyze data. Such pilot programs would provide researchers with platforms for testing new techniques at scale across a variety of application domains. Pilot programs may include multi-site and/or multi-year projects to test collaborative infrastructure adequately. Big Data SSG agencies may need to develop new funding models or partnerships to ensure support and resources for researchers participating in pilot programs. A robust transition-to-practice pipeline is also an integral part of the research process for implementing effective pilots.

Cloud computing provides multiple choices for implementation, including the use of private and public clouds. Big Data SSG agencies are already exploring multiple implementation models that are appropriate to their respective missions and organizational objectives. One example is the NIH Commons,35 a shared and interoperable computing environment intended to take advantage of both private and public cloud computing platforms along with HPC resources. Public clouds are increasingly used as a computing platform by biomedical researchers because they afford a high degree of scalability and flexibility in both the cost and configuration of compute services. The NIH Commons framework focuses on interoperability between resources through common container frameworks and open APIs.

The development of common tools and standards will be a critical component of a national data cyberinfrastructure. The Big Data software ecosystem is currently dominated by open-source software packages. Agencies such as the NSA have also made some of their Big Data software open source (e.g., Apache Accumulo™ and Apache NiFi™). Big Data SSG agencies are committed to supporting open-source software development that both reduces costs and produces innovative, high-quality software.
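As one illustration of the open-API interoperability pattern described above for shared computing environments, the fragment below queries a data-commons REST endpoint using only the Python standard library. The base URL, the /datasets path, and the response fields are invented for this sketch; they do not describe the actual NIH Commons interfaces, which are defined by the participating programs.

```python
import json
import urllib.parse
import urllib.request

# Minimal sketch of a client for a hypothetical data-commons REST API.
# The endpoint, path, query parameters, and response fields below are
# assumptions made for illustration; they are not the NIH Commons or any
# agency's actual interfaces.
BASE_URL = "https://commons.example.gov/api/v1"  # hypothetical endpoint

def search_datasets(keyword, limit=10):
    """Query the hypothetical /datasets endpoint and return the parsed JSON body."""
    params = urllib.parse.urlencode({"q": keyword, "limit": limit})
    with urllib.request.urlopen(f"{BASE_URL}/datasets?{params}", timeout=30) as resp:
        return json.load(resp)

def summarize(payload):
    """Reduce a (hypothetical) search response to id/title pairs for display."""
    return [(d.get("id"), d.get("title")) for d in payload.get("results", [])]

if __name__ == "__main__":
    # Offline demonstration using a canned response shaped like the assumed API,
    # so the sketch runs without contacting the hypothetical service.
    canned = {"results": [{"id": "ds-001", "title": "Example RNA-seq dataset"}]}
    for dataset_id, title in summarize(canned):
        print(dataset_id, "-", title)
```

The design point is that when resources expose simple, documented HTTP interfaces and exchange container images built to a common framework, the same client code can work against private clouds, public clouds, or HPC-hosted services without modification.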