Potential Paper Topics for 2010 Conferences

With suggested conferences - sessions and authors
(Many suggested authors have not been contacted yet.)

Click on Title to Read Abstract Draft

 

  1. Officer Education: Enhanced Pedagogy via New Technology – Distrib Simul & Real Time Aps (ddavis, lmdavis, ward)

  2. Interest Managed Routers Optimize Data Delivered over 10 Gig WANs – I/ITSEC New Technology (tdg, rfl, genew, ddavis)

  3. Advances in GPGPU Computing Acceleration for Simulations – ?Winter Sim? (genew, ddavis, rfl)

  4. Distributed Data Management: Tools Needed for Cloud Computing – I/ITSEC Simulation (yao, tdg, genew, ward, ddavis)

  5. Physics Based Models in Discrete Element and Agent Based Simulations – Sim Interoperability Workshop (ddavis, rfl, ?jmbarton?)

  6. A “Real Human?”: AI’s New Assault on Alan Turing’s Old Challenge – Distrib Simul & Real Time Aps (ddavis, chang, lmdavis)

  7. Leadership, Strategy and Officer Selection: History’s Lessons Applied to Today’s Technology – Joint Forces Quarterly (ddavis, ?green?, lmdavis)

  8. The Expanding Role of Computer Generated Forces in Resolving Jointness Issues – Joint Forces Quarterly (ddavis, rfl, ?blank?, ?dehncke?, ?cerri?)

  9. Nondisruptive Data Logging: Tools for JFCOM Large-scale Simulations – Simulation Interoperability Workshop (genew, rfl, yao, ddavis)

  10. A Behavioral Science Approach to Evaluating Simulations – Distrib Simul & Real Time Aps (ddavis, lmdavis, ?curiel?)

  11. The Future Uses for the GPGPU-Enhanced Cluster at JFCOM – HPCMP UGC (ddavis, rfl, genew)

  12. A Research Administration Course for Academics: A Need Too Long Ignored – Simulation Interoperability Workshop (ddavis, ?walsh?)

  13. Organizing to Best Exploit Academic Excellence in Practical Defense Research – Simulation Interoperability Workshop (ddavis, ?walsh?)

  14. Instantiating MG Robert Scales’ Jedi Using Appropriate Technology – Armed Forces Journal (?scales?, ddavis, lmdavis)

  15. Evolving Simulations: How Soon Can Humans be Excused From the Battlefield? – Sim Interoperability Workshop (ddavis, ward)

  16. GPGPU Programming Courses: Getting the Word Out to the Test and Evaluation Community – ITEA Tech Review (genew, rfl, ddavis)

  17. Requirements Flowing Down or Technological Opportunities Flowing Up: Does History Hold Lessons for Today? – Naval Proceedings (ddavis, ?green?, rfl)

  18. Potential Uses of GPGPU-Enhanced Cluster Computing in MCAE Simulations for Non-linear Mechanical Dynamics – HPCMP UGC (rfl, genew, ddavis)

  19. Training for Globally Dispersed Participants: Using Distributed High Performance Computing – I/ITSEC - Training (ddavis, rfl, genew)

  20. HITL and Metacognition: Self Analysis and Leadership Enhancement During Simulations – Simulation Interoperability Workshop (lmdavis, ddavis, rfl, curiel)

  21. Dwindling Technical Personnel Assets: Designing Responses that Work – Simulation Interoperability Workshop (vcgarcia, ddavis, ?jjmoore, sullivan?)

  22. Accelerated HPC Implementations Used in the Analysis and Training for Systems Engineering Using Mixed Stochastic and Deterministic Simulations – CSER (ddavis, DanBurns, jjmoore)

  23. Meeting the Challenge of Educating Globally Dispersed Naval Officers Using Systems Engineering as a Core Curriculum – I/ITSEC 2011 (DanBurns, ddavis, jjmoore)

  24. Education Technology to Enable Advanced Pedagogy: Implementation with Enhanced Computing Platforms – I/ITSEC 2011 (DDavis, Col. Skowran)

  25. Fostering an Enhanced Technical Dialog between Civilian Researchers and the Uniformed Officer Community – I/ITSEC 2011 (DDavis, Col. Skowran)


 

Officer Education: A New Milieu via HPCC

Distrib Simul & Real Time Aps

 

The DoD has called for a new era in the education of our Officer Corps. Critics are vocal as to current shortcomings. Pedagogical improvements are putatively an answer to both. Many of the revolutionary ideas, and even some of the evolutionary concepts, could potentially be implemented faster and more effectively utilizing the tools and techniques developed at JFCOM for training, analysis and evaluation. While education is a significantly different instructional discipline, many of the major technologies would be directly applicable, e.g., GPU-accelerated computation, improved feedback monitoring, and sophisticated data structures. The authors review several major goals of officer education, remark briefly on current successes and shortcomings, and then lay out how the extant technologies would be applicable and implementable to serve education. Both emerging pedagogies and major technologies will be covered, including meta-cognitive awareness, collaborative learning environments, low-latency/high-bandwidth communications, real-time performance-based evaluation, use of avatars to provide environmental richness, risk-free learning environments, distributed data management, opportunities for conflict, internalization, and self-explanation. The authors also assess the potential drawbacks for education in the DoD, based on their experience. Education is considered at the professional level (officer education), the collegiate level and in K-12 schools. These positions are augmented and validated by a survey of and reference to the pedagogical and organizational literature from all areas of education. Then the potential benefits of the technologies are analyzed, focusing on issues of concern to the DoD and on responding to some of the challenges coming from the pens of the DoD's most insightful commentators. A defensible development and test timetable is laid out and justified.
The paper concludes with several suggestions on how the M&S community might conduct a series of workshops to identify issues, catalog capabilities, cultivate coalitions and seek warfighter support.


 


Interest Managed Routers Optimize Data Delivered over 10 Gig WANs

I/ITSEC New Technology

 

Modern distributed computer systems are often in need of increased network bandwidth. One way to reduce bandwidth needs and optimize data utilization is to replace all-to-all communications with some scheme of interest-managed communications. This can be critical for networks as diverse as the internal communication fabric of a Linux cluster, a local area network supporting a single system, and a globally distributed network. Strangely enough, many of these solutions are most sought to reduce communications bandwidth across networks that already have very high-bandwidth, low-latency characteristics. One area of concern is the inter-node communications provided by proprietary cluster fabrics, such as Myrinet® and InfiniBand®. Another is the use of Gigabit Ethernet for the same purpose. A third is the use of the emerging 10 Gig Wide Area Networks (WANs). The authors discuss their experience with all three of these environments, but concentrate on the last of the three to emerge, 10 Gig WANs. In this case, that means Marina del Rey, California, to Chicago, Illinois, to Arlington, Virginia. Design issues and major goals are discussed. The experimentation plan is laid out, as are changes to it and the difficulties in implementing the plan as originally set forth. The data that were collected are portrayed in ways that will make it easy for users facing similar challenges to ascertain the applicability of the reported research. To conclude the report, the results are analyzed, future research possibilities and needs are identified, and a road map for further advances is presented. The value of pursuing this line of inquiry is carefully documented and prerequisites for progress are proffered. The paper ends with an identification of those for whom this work would hold the greatest potential, based on the authors' experience in DoD computer simulations.
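The core idea of interest management — forwarding each update only to subscribers whose declared region of interest it matches, rather than broadcasting all-to-all — can be sketched as follows. This is a minimal illustration under assumed names (the classes and fields here are hypothetical, not JFCOM router code):

```python
# Minimal sketch of interest-managed routing: subscribers declare a
# region of interest, and an update is delivered only to subscribers
# whose region contains it (instead of all-to-all broadcast).

class Subscriber:
    def __init__(self, name, x_range, y_range):
        self.name = name          # hypothetical node identifier
        self.x_range = x_range    # (min, max) of the region of interest
        self.y_range = y_range

    def interested_in(self, x, y):
        return (self.x_range[0] <= x <= self.x_range[1] and
                self.y_range[0] <= y <= self.y_range[1])

class InterestManagedRouter:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, sub):
        self.subscribers.append(sub)

    def route(self, update):
        """Return the names of subscribers that should receive this update."""
        x, y = update["x"], update["y"]
        return [s.name for s in self.subscribers if s.interested_in(x, y)]

router = InterestManagedRouter()
router.subscribe(Subscriber("node_a", (0, 10), (0, 10)))
router.subscribe(Subscriber("node_b", (50, 60), (50, 60)))

# An update at (5, 5) reaches only node_a; all-to-all would send it to both.
recipients = router.route({"x": 5, "y": 5})
print(recipients)
```

The bandwidth saving comes from the `route` step: as subscriber counts grow, each update traverses the WAN only toward the nodes that declared interest, rather than to every participant.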


 


Advances in GPGPU Computing Acceleration for Simulations

Winter Sim

 

The authors relate their experience in both simulation and the use of General Purpose Graphics Processing Unit (GPGPU) computation acceleration. This paper first documents the history of the use of GPGPUs for acceleration of computers in simulation efforts. Performance data are provided and several areas are outlined in which the use of GPGPUs was theoretically indicated. Then the authors' experience in providing instruction on the use of this technology is reviewed to provide the reader with insights as to how easy or difficult it might be for users to adopt this technology for their own use or to enable its use in their own organizations. A short code sample is presented alongside a comparable routine written in FORTRAN to give the reader some idea of the complexity of the coding effort required. Then, the current advances in architecture and emerging programming models and language possibilities are surveyed. This survey takes the reader from the early use of GPUs, which relied on visual programming techniques that frequently used OpenGL and required laborious conceptualization of algorithms as graphics problems, through the introduction of programming languages such as NVIDIA's Compute Unified Device Architecture (CUDA) and the evolution through the 8800, 9800 and Tesla GPUs, for which fresh performance data are provided. Finally, the most recent achievements, both by the authors and by other groups, are outlined. The conclusion section focuses on future uses and potential advantages and difficulties from the simulation practitioner's viewpoint, based on the authors' varied simulation experiences.
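The essential restructuring that GPGPU programming demands — recasting a serial loop as one data-parallel operation in which each element is computed independently — can be illustrated with NumPy array arithmetic standing in for a CUDA kernel launch. This is not the paper's FORTRAN/CUDA sample, only a conceptual sketch of serial versus data-parallel SAXPY (y = a·x + y):

```python
# Conceptual sketch of the data-parallel restructuring GPGPU coding
# requires, with NumPy vectorized arithmetic standing in for a CUDA
# kernel.  (Illustrative only -- not the code sample from the paper.)
import numpy as np

def saxpy_serial(a, x, y):
    # Traditional CPU formulation: one element per loop iteration.
    out = [0.0] * len(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

def saxpy_parallel(a, x, y):
    # Data-parallel formulation: one operation over the whole array.
    # Conceptually, each element is an independent "thread", as in a
    # CUDA kernel where thread i computes y[i] = a * x[i] + y[i].
    return a * x + y

x = np.arange(4, dtype=np.float64)   # [0, 1, 2, 3]
y = np.ones(4, dtype=np.float64)
print(saxpy_serial(2.0, list(x), list(y)))   # [1.0, 3.0, 5.0, 7.0]
print(saxpy_parallel(2.0, x, y))
```

The arithmetic is identical in both forms; the coding effort lies in expressing the problem so that every element's computation is independent, which is what lets the GPU execute thousands of such computations concurrently.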



Distributed Data Management: Tools Needed for Cloud Computing

I/ITSEC Simulation

 

Every discipline now faces a glut of information and often suffers from a virtually intractable distribution of needed data. While the recent advent of Cloud Computing offers many advantages in ease of access to computer assets, it will surely only exacerbate the data conundrum: more data is valuable, but more data is less manageable. With more than a decade of experience in data management of distributed High Performance Computing (HPC) supporting DoD simulations, the authors recount how problems were identified, opportunities were characterized, solutions were designed, and approaches were tested, all of which find their parallels in Cloud Computing. They review the hard-won insights, report the resolution of problems in that context, and analyze how these insights and solutions will be applicable to the issues of Cloud Computing. Further, they detail the design parameters of the Scalable Data Grid (SDG) in a way that will enable the user to evaluate the use of this approach to Cloud Computing data issues. Both the general approach and the specific programs are available to DoD users, and the solution represented thereby is easily within the grasp of journeyman programmers. The authors will discuss successes of this approach at the Joint Forces Command's Joint Experimentation Directorate. In that instance, the users at the data retrieval consoles were frequently uniformed service members from operational units, i.e., not computer-science-trained professionals, but warfighters. In closing, the authors survey the data issues that their experience indicates will arise in Cloud Computing, review the current literature for early indications of emerging data problems, suggest applicable data management technologies, and recommend future research goals. (Craig ed. 02Mar; Ke-Thia ed. 04Mar)
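One general approach to keeping distributed data manageable — partitioning records across nodes by a hash of their key, so any client can locate the owning node without a central index — can be sketched as follows. The class and key names are hypothetical illustrations, not the actual Scalable Data Grid code:

```python
# Sketch of hash-partitioned distributed data management: each record's
# key deterministically maps to one owning node, so reads and writes
# touch exactly one node regardless of how large the grid grows.
# (Hypothetical illustration -- not the SDG implementation.)
import hashlib

class DataGridNode:
    def __init__(self):
        self.store = {}   # stands in for a node's local storage

    def put(self, key, value):
        self.store[key] = value

    def get(self, key):
        return self.store.get(key)

class HashPartitionedGrid:
    def __init__(self, node_count):
        self.nodes = [DataGridNode() for _ in range(node_count)]

    def _owner(self, key):
        # A stable hash means the same key always maps to the same node.
        digest = hashlib.sha256(key.encode()).hexdigest()
        return self.nodes[int(digest, 16) % len(self.nodes)]

    def put(self, key, value):
        self._owner(key).put(key, value)

    def get(self, key):
        # Only the owning node is consulted -- no global scan needed.
        return self._owner(key).get(key)

grid = HashPartitionedGrid(node_count=8)
grid.put("entity/1234/position", (31.1, 45.6))
print(grid.get("entity/1234/position"))
```

The same locate-by-hash idea underlies many cloud data stores; the design question the abstract raises is how such schemes behave when data volume, node count, and query patterns all grow at once.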

 



Physics-Based Models in Discrete-Element and Agent-Based Simulations

Sim Interoperability Workshop

 

A significant body of expertise in General Purpose Graphics Processing Unit (GPGPU) computing has now been created. This technology is putatively applicable to the issue of rapid computational resolution of problems in real-time infrared sensing and analysis. This paper addresses the contributions of the ISI team to an effort to create a scalable thermal analysis tool for IR signatures in real time. Properly modeling or testing multi- and hyper-spectral sensor performance requires accurate background and target signature models that capture the detailed physical processes that combine to produce real-world target signatures. Current approaches to real-time signature prediction, using collections of commercial desktop computers, lack the bandwidth, speed, and capacity to simultaneously account for all of the physical processes involved in heat transfer (radiation, conduction, and convection). Because of the computational bottleneck inherent in such approaches, convection (the most subtle and computationally demanding to model) is normally accounted for by a simple flat plate model, and the flat plate results are then uniformly applied to the 3-D target model as a whole. This results in a loss of fidelity in the predicted target signature. The approach under study has a multi-physics capability that simultaneously accounts for all of the physical processes involved in heat transfer. It will provide the real-time, high-fidelity, hyper-spectral target signature data required for presenting scenes to hyper-spectral sensors. The problem is explicated, the design issues are discussed and progress toward the final solution is laid out. Benchmark and any preliminary performance data are set forth, comparing runs on homogeneous computing platforms with parallel instantiations on a homogeneous cluster and then with at least one heterogeneous cluster, using GPGPU programming to achieve virtual homogeneity.
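The three heat-transfer processes named above can be combined in a single explicit time-step update for one surface facet. The following is a minimal lumped-parameter sketch with made-up material constants — it illustrates the multi-physics energy balance, not the real-time scalable tool the paper describes:

```python
# Lumped-facet thermal update combining radiation, conduction, and
# convection in one energy balance.  (Illustrative sketch with assumed
# parameters -- not the scalable IR-signature tool itself.)

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def facet_step(T, T_env, T_neighbor, dt,
               emissivity=0.9, h_conv=10.0, k_cond=50.0,
               thickness=0.01, rho_cp=3.5e6, area=1.0):
    """Advance one facet temperature T (kelvin) by one explicit step."""
    # Net radiative exchange with the environment
    q_rad = emissivity * SIGMA * area * (T_env**4 - T**4)
    # Convective exchange with the surrounding air (flat-plate-style h)
    q_conv = h_conv * area * (T_env - T)
    # Conductive exchange with a neighboring facet through the wall
    q_cond = k_cond * area * (T_neighbor - T) / thickness
    # Energy balance: dT = (net heat flow) * dt / (thermal mass)
    heat_capacity = rho_cp * area * thickness
    return T + (q_rad + q_conv + q_cond) * dt / heat_capacity

T = 320.0  # facet starts hotter than its 300 K surroundings
for _ in range(1000):
    T = facet_step(T, T_env=300.0, T_neighbor=300.0, dt=0.01)
print(round(T, 1))  # relaxes toward 300 K
```

The flat-plate shortcut criticized in the abstract amounts to computing `q_conv` once and reusing it for every facet; the multi-physics approach evaluates all three terms per facet per step, which is exactly the workload GPGPU acceleration targets.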



A “Real Human?”: AI’s New Assault on Alan Turing’s Old Challenge

Distrib Simul & Real Time Aps

 

Since the inception of the computer age, the dream of being able to interact with a computer that is indistinguishable from humans has both intrigued and challenged scientists. The issue is no longer moot for two reasons: increasingly imperative objectives demand more human-like computer interfaces, and technical capabilities now provide sufficient compute power to serve the needs of burgeoning sophistication in the behavioral characterizations of humans. The authors, one of whom has been working on this issue since the early 1970s, review the history, survey the discipline, identify a sample of needs, outline the opportunities and justify their analysis. Naturally, they first review Dr. Turing's work and discuss his conception of the challenge. They discuss their own research on Phase I of the Deep Green project as an exemplar of current advances. Both the contributions of the gaming industry and the changes that machine sophistication has wrought upon this "game-obsessed" generation of gamers are reviewed at length. This is the societal segment for which a computer avatar is sought that would be perceived to be indistinguishable from humans. They proceed by setting forth a plan for an extensive system in which any lack of human participants would be detrimental to the social dynamic being envisioned. Any such shortcomings could be overcome by the substitution of a computer-generated participant. They argue that the presence of the computer would best be randomly instantiated, so the "live" participants would never be certain of the non-humanness of their communicative partner and would not lose focus on the object of the effort. The authors present survey data on current attitudes concerning this issue. They conclude with an outline of future research requirements and possible ethical concerns.

 



Leadership, Strategy and Officer Selection:
History’s Lessons Applied to Today’s Technology

Joint Forces Quarterly

 

Leadership has made the difference in military outcomes and national survival for the entire span of history. Strategic innovation and truly effective implementation of new technologies have been a hallmark of victorious forces since biblical times. Identifying, selecting, empowering and supporting leaders who can provide such innovation and efficacy is an unending task. Technology is now available to facilitate this task, and it can represent a revolutionary force in defense management. Advances in High Performance Computing (HPC) technology provide a welcome opportunity to engage learners in real-time authentic contexts most relevant to their desired training outcomes. The authors outline research in cognitive science and education that indicates meaningful learning occurs when education is integrated with frequent opportunities for the participants to 1) apply otherwise abstract knowledge in germane contexts, 2) receive feedback on the success of those applications, and 3) re-engage in the instructional process, having refined the targets for learning. The authors observe that, due to constraints in time, budget, training and material, most learners currently work passively from a textbook or strive to learn in a setting of diminished rigor. They rarely engage with the content at the depth described above, which is necessary for the critical understanding that will enable them to perform well in the varied contexts facing a 21st Century military officer. It is asserted that the DoD could accelerate the learning curve and raise the higher-level cognitive capabilities of its leadership by providing immediate, repeated and user- or variable-influenced simulation experiences. In these, the learners would have to synthesize and apply their developing content knowledge.
Further, evidence is presented that the use of HPC would also provide an infrastructure for incorporating the newly honed knowledge of service members who had been in the field and had returned to the instructional setting, thereby informing the training process further. In conclusion, additional research objectives are set forth.



The Expanding Role of Computer Generated Forces in
Resolving Jointness Issues

Joint Forces Quarterly

 

The pace of defense structure evolution is now so dizzying that the leisurely approach to developing cross-organizational cohesion is prohibitively expensive in wasted resources, missed opportunities and loss of human life. New leaders are expected to be proficient in applying operational art to joint warfighting and the joint planning processes. The need to prepare leaders who are uniformly competent in planning operations that integrate and leverage all military and non-military capabilities, and who are strategically minded leaders capable of critical thinking, is seen as a daunting task. They must be skilled in aligning and maximizing capabilities across components, services, and agencies, including international forces, as well as imbued with a joint perspective. Their fluency in joint concepts, doctrine, systems, languages and processes is a sine qua non of success. The sand table/map table exercises of WWII have been replaced by more sophisticated and dynamic representations of the battlespace provided by computers. The authors lay out their experience in providing High Performance Computing capabilities to the Joint Forces Command's Joint Experimentation Directorate (J9), as this activity well illustrates current advances in computer simulation environments. They then survey the discipline of joint forces management and leadership. The various conundra appearing in that review are identified and the hurdles for resolving them are discussed. Each, in turn, is then analyzed within the context of how large-scale Computer Generated Forces (CGF), simulating operations on virtually limitless synthetic battlespace environments, can better address these issues. Further, the authors present accepted pedagogical approaches to justify their contention that the insights garnered from such computer analyses can best be inculcated in joint force leaders through such a system itself.
They adduce practical experience in the Joint Urban Operations (JUO) experiments at J9 to support their theses. They further outline the potential issues that may arise in the future and suggest how such issues may also be most amenable to the approach set forth in this paper. They conclude by identifying the needs for future research and a cost-benefit analysis of the application of this approach.



Nondisruptive Data Logging:
Tools for JFCOM Large-scale Simulations

Simulation Interoperability Workshop

 

Among the most persistent issues and problems in training, simulation and education are those associated with the efficient and effective collection of information. These issues become exacerbated as the data being generated increasingly outstrip the human capacity to sense, remember and analyze the results of the effort. These shortcomings can occur in several dimensions, including volume of information (data glut), speed of transmission to the analyst (input overload), and geographical dispersion of the data (distributed data). The authors have been wrestling with these issues for more than a decade and present both a theoretical overview and a practical set of lessons learned based on those experiences. A review of the problems and a survey of various approaches to their solution will be presented. One of the most problematic issues is the need to reliably collect all of the germane information that is required or desired by the user or analyst without negatively impacting the training, simulation or education that is the primary focus of the activity. Acknowledging theoretical limits to non-intrusive observation, the goal is held to be: maximize the data and minimize the disruption. The authors lay out with specificity their development of the JLogger system to collect needed information out of the Joint Forces Command's experimentation, which typically uses Joint Semi-Automated Forces (JSAF) simulations for analysis and evaluation. Examples of problems faced in achieving the goals of the experiments, the approaches used to resolve them and the solutions developed are all presented with a view toward assisting other similarly tasked professionals with assessing their needs, their problems and their opportunities. The authors conclude by laying out the way the data were collected and then structured to optimize their usability by the warfighter participants. Then they look to the future of data collection during live, virtual and constructive events.
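The "maximize the data, minimize the disruption" goal is commonly pursued by decoupling the simulation loop from slow storage I/O with an in-memory queue drained by a background writer. A minimal sketch under assumed names follows — this is an illustration of the general technique, not the JLogger implementation:

```python
# Sketch of nondisruptive logging: the simulation thread only enqueues
# events (a cheap in-memory operation), while a background thread
# drains the queue to storage.  (Hypothetical names -- not JLogger.)
import queue
import threading

class NondisruptiveLogger:
    def __init__(self):
        self.events = queue.Queue()
        self.written = []  # stands in for a file or database sink
        self._writer = threading.Thread(target=self._drain, daemon=True)
        self._writer.start()

    def log(self, event):
        # Called from the simulation loop: O(1), no disk I/O here.
        self.events.put(event)

    def _drain(self):
        while True:
            event = self.events.get()
            if event is None:            # sentinel: stop draining
                break
            self.written.append(event)   # real system: write to disk

    def close(self):
        # Flush remaining events, then stop the writer thread.
        self.events.put(None)
        self._writer.join()

logger = NondisruptiveLogger()
for tick in range(5):
    logger.log({"tick": tick, "entity": "tank_01"})  # simulation loop
logger.close()
print(len(logger.written))  # 5
```

Because `log` never blocks on storage, the simulation's frame rate is insulated from I/O stalls; the trade-off, which the abstract's "theoretical limits" caveat acknowledges, is bounded memory for the queue and a window of unflushed events if the process dies.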



A Behavioral Science Approach to Evaluating Simulations

Distrib Simul & Real Time Aps

 

Based on decades of experience in the discipline of agent-based battlespace simulation, the authors assert that the failure of interdisciplinary understanding between the physical science and the behavioral science communities is identifiable as one of the real impediments to more effective utilization of Computer Generated Forces (CGF) by the DoD. They review both the history of this field and their observations of where the lack of real communication between the participating professional communities from independent disciplines has impeded the production of insights that could have made real differences to the warfighter on the battlefield. The dichotomy is exemplified by the physical scientists' focus on Schrödinger and Heisenberg when discussing the "Observer Effect," while their behavioral scientist colleagues are thinking of the Hawthorne Effect. This exemplar is examined at length, and that analysis is augmented by anecdotal evidence from the authors of persistent failures to recognize valuable insights due to interdisciplinary friction and poor communication. Further, the dearth of behavioral scientists in analytical groups is documented by research surveys conducted by the authors. This paucity of pertinent professionals is manifestly injurious when either otherwise efficacious approaches are ignored or insufficient understanding of behavioral science perspectives and techniques precludes the adoption of needed analyses. Evidence is advanced that this is injuriously exacerbated when research teams are completely devoid of sophisticated behavioral scientists. Suggestions for overcoming this deficiency in research team composition and enhancing the application of all germane technologies are advanced. Future studies are recommended and alternative paths to DoD research goals are outlined. The DoD researcher reading this article should receive further illumination as to the problem, its impact and practical responses to it.



The Future Uses for the GPGPU-Enhanced Cluster at JFCOM

HPCMP UGC

 

One of the major concerns attached to Dedicated High Performance Computing Project Investments (DHPIs) is the durability of the need for which they are awarded. Preferably, such need will long outlast the nominal observation period by HPCMP personnel of the use of the DHPI assets. The authors present the case of the 2007 award of the 256-node, GPGPU-enhanced Linux cluster Joshua, for which they are partly responsible, at least in terms of research agenda. JFCOM has had a continuing need for High Performance Computing (HPC) since the inception of its Joint Concept Development and Experimentation Directorate (J9) and has been the beneficiary of support and awards from HPCMP. J9 uses battlespace simulation to fulfill its mission: the development of emerging joint concepts, the conduct of joint experimentation, and the coordination of DoD experimentation efforts in order to provide joint capabilities. The award of Joshua began with the configuration of the system and finally resulted in the transfer of that asset to JFCOM last year. During the waning days of 2009, J9 underwent a significant change in emphasis at the direction of the Joint Forces Commander and the Director of Joint Experimentation, so the concerns set forth above were reawakened, at least in the minds of the authors. This paper sets out what these new directives and constraints were, discusses how the new needs differed from the old ones, analyzes how this impacted HPC requirements, presents how the "open and balanced" design of the system made it an effective tool for the new simulation emphasis, and records how effectively Joshua is being used today. The authors conclude that this particular DHPI award is still an invaluable asset and hold that their approach to ensuring that it maintains its utility to the warfighter is applicable in other settings, by other HPCMP users.

 



A Research Administration Course for Academics:
A Need Too Long Ignored

Simulation Interoperability Workshop

 

Inefficiencies in the administration of research cause immeasurable delays, increased costs and missed opportunities. The authors can call upon decades of experience with research in government, defense contractor and academic settings to support this assertion. They are intimately informed as to the academic training process that produces research managers who are well versed in their science, but woefully weak in management skills. While many of these impediments to optimization may be wedded to intractable human frailties, the authors assert that improvements are not only possible, but mandatory. Drawing from extensive experience with research professionals, they first lay out the manifold ways in which otherwise brilliant researchers are almost universally untutored in administrative functions, collegial organization, legal imperatives and financial management. They adduce examples to show how the lack of elementary understanding of these areas did, and will in the future, prove to be disruptive, constraining or fatal to research. The authors then lay out, with specificity, their outline of a course designed to address their understanding of the most important needs in this area. They recount their discussions with government, industry and academic leaders concerning the need for, comprehensiveness of and proper method of presentation of the course developed. The outline is then presented and each section justified as to its need, applicability, and pedagogical goals. The overall goal of the course is to give researchers and graduate degree candidates an understanding of the range of sources of research support, the nature of the award process, and the effective management of research, all in order to prepare them to rapidly master the proposal process, organizational techniques, management requirements and the administration of research in a government, academic or contractor environment.
The authors close with their current plans for implementation and an overview of their continuing research into this area of great need.



Organizing to Best Exploit Academic Excellence in
Practical Defense Research

Simulation Interoperability Workshop

 

Several issues are precluding the optimal use of available research capabilities. A general lack of understanding by both academia and DoD research consumers is causing U.S. warfighters to go without capabilities that they would otherwise have available to them. The authors, having served both on duty in the U.S. military and in various positions within academia, have recognized the barriers that have been erected and have experienced the sequestration due to "silo-ing" or "stove-piping" of academic disciplines. This article takes a very pragmatic approach to delineating the current problem, identifying the hurdles to improvement, describing successful interventions to reduce these problematic issues, defining conditions necessary for success, and discussing future potentially amenable areas that warrant investigation and application. Well-recognized management analytical techniques are described, and reports of their application are presented with an eye toward allowing research managers to make rational decisions as to whether these approaches will bear fruit in their own environments. Included in the discussion are the Heilmeier Catechism approach to problem identification and definition, Logan's "Tribal Leadership" viewpoint, Fred Brooks' Mythical Man-Month insights, Norm Augustine's Laws, and other theoretical approaches to these issues. The authors then lay out a set of their own insight-driven rules for enhancing DoD research. They conclude with an analysis of future trends in the defense sector and suggestions for responding to upcoming problems.



Instantiating MG Robert Scales’ Jedi:
Using Appropriate Technology

Armed Forces Journal

 

Being fully in accord with MG Robert Scales' concept for the changes needed to provide for the enhanced education of officers in the United States armed forces, the authors note that several of his concepts may require innovative implementation approaches and several could be greatly enhanced by the use of existing technologies that are not currently being fully exploited. They review the key concepts in the General's seminal article from the Armed Forces Journal (Scales, 2009), set forth a series of issues to be addressed, review their previous work that is applicable to this problem, and then lay out a series of steps that are desirable in achieving the end results identified in the referenced paper. The paper presents the case that education in the 21st Century can only measure up to defense needs if technologies developed in the simulation community, further enhanced by the power of high performance computing, are harnessed to supplant traditional didactic instruction. The authors cite their military credentials and their professional experiences in simulation, high performance computing and pedagogical studies to support their thesis that this implementation is not only required, but also feasible, supportable and affordable. Surveying and reporting on work in computer-aided education, this paper will discuss the pedagogical imperatives for group learning, risk management and surrogates for military mentors who are too often absented by virtue of military operations. All of this can be optimally delivered with the use of current computer technologies. Further, experience and research are adduced to support the thesis that effective implementation of this level of computer-aided education is enabled only by, and is largely dependent upon, high performance computing. This is made especially practical and affordable by the ready utility and acceptable costs of Linux clusters.



Evolving Simulations:
How Soon Can Humans be Excused From the Battlefield?

Sim Interoperability Workshop

 

Early in his involvement with agent-based modeling and battlefield simulations, one of the authors got into a heated, but civil, discussion with a retired Major General and a Colonel who had commanded a tank unit. The discussion was sparked by the observation that, as agent-based behavior algorithms became more sophisticated, humans in combat vehicles would become less necessary. More than a decade has now passed, and the authors feel it is time to resurrect that question. New computing power is available, new control algorithms have been implemented, new social science has refined behaviors, new sensors have extended our view of the world, and new acceptance of remotely controlled vehicles has lessened the romantic affinity for the chivalrous ideal of the warrior physically present on the battlefield. Yet issues remain that inhibit or prevent such a course of action. The authors believe that engaging in a discussion of the possibilities and merits of remotely controlled or autonomously driven vehicles will have the salutary effect of directing future research and facilitating the adoption of technologies that might otherwise be hindered by old prejudices and new reactionary responses. Current capabilities will be surveyed, germane hurdles identified and appropriate paths for research justified. Alternative outcomes for the various research paths will be analyzed and rigorously evaluated. Failures to accomplish what had previously been predicted will be presented candidly, to prevent any overly optimistic view of the imminence of this technology's fielding. In each case where the future is in any way foreseen, the authors will carefully document their assumptions and comprehensively lay out a range of possibilities.



GPGPU Programming Courses:
Getting the Word Out to the Test and Evaluation Community

ITEA Tech Review

 

Putatively, heterogeneous computing offers significant advantages for many disciplines that depend on, or would be amenable to, High Performance Computing. A form of heterogeneity that has proven demonstrably beneficial is the use of General Purpose Graphics Processing Units (GPGPUs) to accelerate subroutines that are particularly suited to the GPU's architecture. Many such potentially accelerable subroutines are also found in Test and Evaluation (T&E) environments. The authors will share their extensive experience implementing such subroutines on GPGPUs, quantify their successes and discuss rules of thumb for analyzing the probable responsiveness of other algorithms. They will compare and contrast the programming ease of GPGPUs and of the Sony-Toshiba-IBM (STI) Cell chips. They will report on their survey of the use of heterogeneous computing in the T&E community. Most importantly, they will discuss at length the driving forces behind the creation, design, organization and presentation of three introductory courses they have taught, introducing programmers of varying levels of experience to GPGPU programming. They will present sample materials and discuss lessons learned from each course. All of this is intended to help any potential user of this technology realistically evaluate their own needs, scope the training required, identify competent instructors and implement a course of their own. They will present their case for the benefits of accelerating the adoption and effective use of heterogeneous computing by relating their experiences with Joshua, a 256-node GPGPU-enhanced Linux cluster at JFCOM in Suffolk, Virginia.



Requirements Flowing Down or Technology Opportunities Flowing Up:
Does History Hold Lessons for Today?

Naval Proceedings

 

Current policy seems focused on restricting DoD research to only those areas in which Combatant Commanders have a stated and documented interest. While this responds well to the ostensible need to keep unfocused research from further shrinking and critically depleting already challenged budgets, it may not be optimal for the warfighter it purports to support. A historical analysis may be illuminating; this paper will present some historical anecdotes, track trends in defense research and apply an analytical approach to how best to evaluate research emphases and direct new research. The history of DARPA will be set down in some detail, as it represents one of the most visible and, by its own charter, the most forward-looking of all DoD research establishments. The potential outcomes of several research paths will be offered, and a general outline of the alternative futures under each will be advanced. A brief discussion of the impact of academic disciplines and their own inbred peculiarities will be given. The authors will conclude with a suggested blueprint for use by the next DDR&E.



Potential Uses of GPGPU-Enhanced Cluster Computing in
MCAE Simulations for Non-linear Mechanical Dynamics

HPCMP UGC

 

Newly emerging heterogeneous computing is now proving useful in Linux clusters in the form of systems with General Purpose Graphics Processing Units (GPGPUs) such as the nVidia 8800s, 9800s and Teslas. Experience shows these are effective in addressing many issues that have long been problematic. Sparse systems of linear equations have long been computational bottlenecks in applications ranging from science to optimization. For many problems, including Mechanical Computer Aided Engineering (MCAE), iterative methods are unreliable and the performance of sparse matrix factorization is preferable. Multi-frontal sparse matrix factorization is often favored; by representing the sparse problem as a tree of dense systems, it maps well to modern memory hierarchies and allows effective use of BLAS-3 dense matrix arithmetic kernels. Graphics Processing Units (GPUs) are architected differently from their general-purpose hosts and have an order of magnitude more floating point processing power. These units were largely single-precision when first released, but contain an increasing proportion of double-precision registers. This paper explores the hypothesis that GPUs can accelerate a multi-frontal linear solver even when processing only a small number of the largest frontal matrices. The authors show that GPUs can more than double the throughput of sparse matrix factorization, measured in a realistic "end-to-end" performance test. This in turn promises a very cost-effective speedup for many problems in disciplines such as MCAE. The cost effectiveness appears in compute power per watt, reduced hardware footprint, reduced power and cooling costs and sustainable programming investments. Performance data are presented, as well as future research needs and goals.
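The offloading strategy the abstract describes, factoring only the largest frontal matrices on the GPU, can be sketched roughly as follows. This is a hypothetical illustration, not the authors' actual solver: the threshold value, function names and the use of a CPU Cholesky routine as a stand-in for a GPU kernel are all assumptions for exposition.

```python
import numpy as np

# Hypothetical cutoff: fronts at or above this dimension go to the GPU.
# Because dense factorization work grows as O(n^3) in the front size,
# offloading just a few large fronts captures most of the total flops.
GPU_THRESHOLD = 512

def factor_frontal_cpu(front):
    """Dense Cholesky factorization of one frontal matrix (BLAS-3 heavy)."""
    return np.linalg.cholesky(front)

def factor_frontal_gpu(front):
    """Stand-in for a GPU path; a real solver would call a CUDA/cuBLAS
    kernel here. Mathematically identical, only the device differs."""
    return np.linalg.cholesky(front)

def multifrontal_factor(frontal_matrices):
    """Factor each dense frontal matrix in the elimination tree,
    routing only the largest fronts to the accelerator."""
    factors = []
    for front in frontal_matrices:
        if front.shape[0] >= GPU_THRESHOLD:
            factors.append(factor_frontal_gpu(front))
        else:
            factors.append(factor_frontal_cpu(front))
    return factors
```

A real multi-frontal code would also perform the assembly and extend-add steps between parent and child fronts; the sketch isolates only the device-routing decision that the paper's hypothesis turns on.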



Training for Globally Dispersed Participants:
Using Distributed High Performance Computing

I/ITSEC - Training

 

Geographical dispersion, always a hallmark of military careers, is even more common and disruptive in the 21st Century. Coincidentally, group training is increasingly demanded by dynamic situations, asymmetric enemies, joint operations and increased public awareness of the military's day-to-day operations. Current attempts to muster and collect dispersed personnel for needed training are problematic, and those situations are now exacerbated by the scarcity of travel funds. While many technical aids have been available for some time, the sophistication enabled by distributed High Performance Computing (HPC) is asserted to be critical in the days to come. Based on decades of military experience, DoD support activities and academic research, the authors set forth and support their thesis that HPC can be effectively applied today to DoD training needs that would otherwise go unmet or be poorly delivered. A quick survey of current activities in this discipline is presented. The paradigm example of this capability is the HPC-enabled system at JFCOM, as used by the Joint Experimentation Directorate. This system has proven effective, stable and productive in live, virtual and constructive experiments for evaluation and analysis. Its further use for training would be an important step, both for the training itself and for implementing technologies more focused on that specific task. The authors lay out their experiences in ways that training managers and trainers will find useful in evaluating these emerging tools. They discuss real-life experiences and present evidence of reasonable costs, accessible maintenance and available system training to better enable trainers to make decisions regarding this evolving technology. To conclude, they outline anticipated enhancements to this discipline and opine on how these may impact training, of both dispersed and congregated personnel, and fulfill currently unmet needs.

 



HITL and Metacognition:
Self-Analysis and Leadership Enhancement During Simulations

Simulation Interoperability Workshop

 

A key factor in effective leadership is a high degree of metacognition, or awareness of the processes of one's own thinking and of the factors and conditions that influence it. This understanding of cognitive filters is critical in rapidly internalizing and effectively achieving and using situation awareness. This paper presents historical anecdotes supporting this assertion. It then describes Human-In-The-Loop (HITL) experiences at the Joint Forces Command's Joint Concept Development and Experimentation Directorate. It discusses the insights from these experiments, analyzes the metacognitive schema in which the participants are immersed, presents the design of an instrument to investigate those issues, and analyzes preliminary data. This work is compared to other leadership training, both within and outside the DoD. The authors review the literature supporting the impact of metacognition on leadership and survey previous efforts to incorporate this approach into formal and informal educational settings. This analysis includes a critical review of the efficacy of these approaches and considers the future of such programmatic implementations. The authors' experience with live leadership training and with leadership growth during large-scale battlefield simulation experiments is set forth, compared and characterized to help explicate the issues and the opportunities. They relate successes in the intentional development of metacognitive strategies in developing leaders, and demonstrate how the HITL experience is especially effective in this process. The DoD trainer will be provided with both the theoretical underpinnings of this discipline and a pragmatic series of questions to pose when evaluating their own situation and needs. The paper concludes with a supportable view of the future requirements and desirable goals of the DoD for continued research into this area of vital interest.

 



Dwindling Technical Personnel Assets:
Designing Responses that Work

Simulation Interoperability Workshop

Despite a widely accepted understanding that the technical personnel available to the United States defense effort are in decline, few responses result in programs that get beyond the stage of being content with the appearance of participation. The lack of rigorous evaluation methods hampers objective analysis of the many attempts to attract students to technical work and to develop them into technical assets available for use by defense organizations. The authors rely on decades of experience in the military and intelligence communities, as well as similar periods in academia and defense research, to describe and analyze the problems and to survey many of the attempted solutions. They then lay out programs with a different approach, with which they have been associated, and recount the hurdles and successes observed in these efforts. They focus on the bases for, and the outcomes of, attempts to attract more U.S. citizens to technical training, efforts to keep those students in technical fields, and the final effort to steer them into defense work, be it military service or defense research. They review both their own experiences and those of other educators in adducing evidence of the need for significant changes of policy and approach, at both national and local levels. Their thesis is that this problem is not attributable to any single factor such as education, ethnic isolation, teacher training, recruitment, societal attitudes, retention, or security clearance restrictions; it is an amalgam of all of these and more. They discuss how their current program addresses each of these issues in turn. They conclude by identifying a series of policy issues they see as critical to these problems and projecting the likely futures depending on whether these changes are adopted.

 



Accelerated HPC Implementations used in the Analysis and Training for
Systems Engineering using Mixed Stochastic and Deterministic Simulations

Conference on Systems Engineering Research

Systems Engineering is an increasingly vital cross-disciplinary approach to ensuring that large and sophisticated projects deliver the performance sought from them. While Systems Engineering is deeply dependent on models that must interact with each other, the interfaces between these models, as well as those between the models and the systems engineer, are often problematic. Recent advances in the use of accelerators, such as General Purpose Graphics Processing Units (GPGPUs), Field Programmable Gate Arrays (FPGAs), Processor In Memory (PIM) and the STI Cell chip, have brought hitherto unavailable computational power to the student and the analyst. This power is both effective and affordable. This paper sets forth the authors' experiences with accelerated Linux clusters in DoD environments and relates these capabilities to their experience as Naval Officers and as managers of technical programs. Along with presenting data on the instantiation of the demonstrably successful GPGPU-enhanced cluster at the U.S. Joint Forces Command, they present the impact this class of computer could have on hybrid stochastic-deterministic simulations used in Systems Engineering. They introduce new concepts in the employment of novel analytic techniques for establishing the relative sensitivity of varying parameters within the stochastic simulations, allowing the Systems Engineer to extract useful insights from reduced data sets. These non-deterministic runs comprise literally millions of randomly derived inputs. Their results can be optimally integrated with equation-based deterministic models to give the most valid and verifiable results. Comments on the potential impact on training and education are advanced, based on the authors' experiences in academia.
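The kind of parameter-sensitivity screening the abstract alludes to, ranking which inputs of a stochastic simulation most influence its output across many random runs, can be illustrated with a minimal Monte Carlo sketch. The toy model, the sample count, and the use of simple correlation as the sensitivity measure are all assumptions for illustration; the abstract's actual analytic techniques are not specified.

```python
import numpy as np

def toy_simulation(x1, x2, x3):
    """Hypothetical model output standing in for a real stochastic
    simulation; x1 dominates, x3 barely matters."""
    return 3.0 * x1 + 1.0 * x2 + 0.1 * x3

def rank_parameter_sensitivity(n_runs=10_000, seed=0):
    """Monte Carlo screening: draw random inputs, run the model once per
    sample, and rank parameters by the magnitude of their correlation
    with the output (most influential first)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(n_runs, 3))
    y = toy_simulation(X[:, 0], X[:, 1], X[:, 2])
    corrs = [abs(np.corrcoef(X[:, i], y)[0, 1]) for i in range(3)]
    return [int(i) for i in np.argsort(corrs)[::-1]]
```

Correlation-based screening only detects roughly monotone effects; variance-based methods (e.g. Sobol indices) are the usual next step when interactions or non-monotone responses matter, which is one way "reduced data sets" can still yield defensible sensitivity rankings.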

 



Meeting the Challenge of Educating Globally Dispersed Naval Officers using
Systems Engineering as a Core Curriculum

I/ITSEC Education

The selfsame rapidly changing geopolitical and technical environments that make the continuing educational advancement of today’s U.S. Naval officer vital also make it extremely difficult. Operational requirements mandate remote assignments, impose time constraints, disrupt personal relationships and debilitate with stress and fatigue. Based on their observations as Naval Officers and as practicing academics, the authors present their case for the efficacy of using Systems Engineering as a core curriculum for postgraduate officer education and for using new implementations of distance education as a partial remedy for the challenges of educating a dispersed student body. They first outline the major characteristics of Systems Engineering and of Naval Officer performance. They then assess the extent to which Systems Engineering and Naval leadership correspond. The benefits of a “whole person” education, in addition to the continuing need for specific sensor and weapons systems training, are analyzed, and specific examples are adduced to support the paper’s thesis. The problems facing the educator and the student in the Navy milieu are then examined and categorized. This analysis leads to the need for new methodologies for providing an educational opportunity to deployed officers, one that they can use and that will motivate them, prepare them for advancement, and create the officer corps the nation needs. A brief comparison with other disciplines and their applicability to the Naval environment is advanced. The authors rely on their experiences at the Naval Postgraduate School and the University of Southern California, as well as their own educational histories, to enhance this analysis. They close with a survey of purported futures for the Navy and the needs these futures will impose on its officers. They explicate how the new technologies from NPS and USC, e.g. DREN, HPC, Interactive Computer Aided Education, and Remote Collaborative Learning, can be incorporated to meet these challenges.

 



Education Technology to Enable Advanced Pedagogy:
Implementation with Enhanced Computing Platforms

I/ITSEC Education

This paper focuses on one DoD requirement and two disciplines that are highly germane to accomplishing the Warfighter's goals. The authors have hands-on experience with that growing need and are actively engaged in two emerging disciplines that could respond to it. The need addressed here is the requirement to train and educate a widely dispersed and manifestly diverse population of service personnel; the technologies are those of nascent pedagogical methods and of educational or instructional technology. The areas addressed by this paper are the identification of pedagogical techniques and research results that are particularly amenable to technological implementation, and the new technologies, largely in the computer sciences, that are not yet commonly incorporated into the educator's technology toolbox. The authors eschew and decry the tendency of education technology to be limited to recreating the classroom and digitizing written material. They focus, both in their work and in this paper, on new pedagogical insights not yet implemented with consistency and on old pedagogical insights not implemented because of human limitations: one teacher facing ~30 individual students, for example, precludes much individuation in the approach to the material. In technology they focus on systems and applications that are genuinely instructor-friendly, based on their own classroom experience and observation of others in both academic and DoD training environments. The technology assessment is based on the ISI team's experience with the fiscally and operationally sensible acceleration of both compute and communication systems. They use technologies that have recently been developed and operationally proven to enable large-scale simulation and interactive evaluation events and experimentation.
The paper concludes with an analytical look at the future that a thoughtful synthesis of all these insights and advances might hold for both training and education in the military.

 



Fostering an Enhanced Technical Dialog between
Civilian Researchers and the Uniformed Officer Community

I/ITSEC Education

The authors have long been familiar with the close and cordial working relationship between civilian researchers and the military officers with whom they interface, whether as Warfighters or as research program managers. However, they have also observed a consistent set of hurdles hampering communications between these two groups. These hurdles are usually overcome by dint of the assiduous application of the communications skills and intellectual openness of the two communities, but not without cost. In this paper, the authors survey and review some of the situations in which they have participated or that they have observed, and the costs they have recognized, including delays, missed opportunities, program cancellations, and reduced capabilities. The intention is to be constructively critical, and the authors follow their survey by recounting their personal experience with two types of technical personnel exchanges: civilian technical students acting as interns on classified projects at DoD laboratories, and military graduate students doing their master's thesis research in a civilian academic setting. The paper then focuses on an analysis of what this portends for the future and how it may affect the issues set forth above. The paper closes with a discussion of the road ahead, laying out the steps that may be necessary to foster the sought-after enhancement in real communications and identifying the "stakeholders" with whom such a program would have to be initiated. All of this is supported by both personal experience and theoretical foundations from the organizational behavior and communications management disciplines, with the goal of better bridging the "what do they want / what can they do for us" communications gap.