
Artificially Intelligent Cyber Security: Reducing Risk & Complexity

Authors:
John Carbone, Forcepoint LLC
Dr. James Crowder, Colorado Engineering Inc.

Abstract:
Historically, research shows that the analysis, characterization, and classification of complex, heterogeneous, non-linear systems and their interactions have been difficult to understand accurately and model effectively. Likewise, the exponential growth of the Internet of Things (IoT) and Cyber Physical Systems, together with the litany of recent accidental and malicious cyber events, portrays an ever-challenging security environment wrought with complexity, ambiguity, and non-linearity, giving industry and academia significant incentive to pursue advanced, predictive solutions that reduce persistent global threats. Recent advances in Artificial Intelligence (AI) and Information Theoretic Methods (ITM) are benefitting disciplines struggling to learn from rapidly increasing data volume, velocity, and complexity, and research shows that Axiomatic Design (AD) provides design and datum disambiguation for complex systems through information content reduction. We therefore propose a comprehensive, transdisciplinary approach that combines axiomatic design with advanced, novel, and adaptive machine-learning techniques and information theoretic methods. We show how to significantly reduce risk and complexity by improving cyber system adaptiveness, enhancing cyber system learning, and increasing cyber system prediction and insight potential where context is sorely lacking today. We provide an approach for deeper contextual understanding of disjointed cyber events by improving knowledge density (KD, how much we know about a given event) and knowledge fidelity (KF, how well we know it), ultimately improving the quality and autonomy of mitigation decisions. The result is improved classification and understanding of cyber data, reduced system non-linearity and cyber threat risk, increased efficiency through lower labor and system costs, and greater peace of mind.
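To make the information content reduction idea concrete, the sketch below shows Suh's standard Axiomatic Design information measure, I = log2(1/p), for a single functional requirement modeled with uniform ranges. This is a minimal illustration, not code from the paper; the function name and the latency numbers in the example are assumptions.

```python
import math

def information_content(design_range, system_range):
    """Axiomatic Design information content (Suh): I = log2(1/p), where p is
    the probability that the system range falls inside the design range.
    Both ranges are modeled here as simple (low, high) uniform intervals."""
    d_lo, d_hi = design_range
    s_lo, s_hi = system_range
    common = max(0.0, min(d_hi, s_hi) - max(d_lo, s_lo))  # overlap of the two ranges
    if common == 0.0:
        return float("inf")          # the design cannot satisfy the requirement
    p = common / (s_hi - s_lo)       # probability of success
    return math.log2(1.0 / p)        # bits of information the design still "owes"

# Hypothetical example: a detection-latency requirement of 0-200 ms against a
# sensor whose actual latency spans 50-400 ms.
print(information_content((0.0, 200.0), (50.0, 400.0)))  # ~1.22 bits
```

Driving I toward zero for each functional requirement is one way the residual complexity, and hence the residual risk, of a cyber system design can be quantified.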

Full access to this whitepaper requires submitting the form on the following page. Click the link below to continue.

[download id="26882"]


The Systems AI Thinking Process (SATP) for Artificial Intelligent Systems

Authors:
Dr. James Crowder, Systems Fellow, Colorado Engineering Inc.
Dr. Shelli Friess, School of Counseling, Walden University

Abstract:
Previous work has focused on the overall theory of Systems-Level Thinking for artificial intelligent entities in order to understand how to facilitate and manage interactions between artificial intelligent systems and humans or other systems. This includes the ability to predict and produce behaviors consistent with the overall mission (duties) of the AI system, how to control those behaviors, and the types of control mechanisms required for self-regulation within an AI entity. Here we advance that work to look at the overall Systems AI Thinking Process (SATP) and the architecture design of self-regulating AI systems-level processes. The overall purpose is to lay out the initial design and discuss the concepts needed to create an AI entity capable of systems-level thought and processing.

Full access to this whitepaper requires submitting the form on the following page. Click the link below to continue.

[download id="26876"]


Synthetic AI Nervous/Limbic Derived Instances (SANDI)

Authors:
Dr. Shelli Friess, School of Counseling, Walden University
Dr. James A. Crowder, Systems Fellow, Colorado Engineering Inc.
Dr. Michael Hirsch, President and CTO, ISEA TEK LLC

Abstract:
Artificial feelings and emotions are beginning to play an increasingly important role as mechanisms for facilitating learning in intelligent systems. Presented here are the theory and architecture for an artificial nervous/limbic system for artificial intelligence entities. We borrow from the military concept of operations management and start with a modification of the DoD Observe, Orient, Decide, and Act (OODA) loop, adding a machine learning component and adapting it for the processing and execution of artificial emotions within an AI cognitive system. Our concept, the Observe, Orient, Decide, Act, and Learn (OODAL) loop, makes use of Locus of Control methodologies to determine, during the observe and orient phases, whether the situation is under external or internal control, which affects the decisions, emotions, and actions available to the artificial entity (e.g., a robot). We present an adaptation of the partial differential equations that govern human regulatory systems, recast for voltage/current regulation rather than the blood/nervous system regulation found in humans. Mirroring human trial-and-error learning, we incorporate a Q-learning component that allows the AI entity to learn from experience whether its emotions and decisions were beneficial or problematic.
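As a concrete illustration of the Learn step, the following is a minimal sketch of tabular Q-learning wired into an observe-orient-decide-act-learn cycle, with the orient phase restricting the action set according to the locus of control. The state names, action names, reward signal, and "controllable" flag are illustrative assumptions, not the architecture described in the paper.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning inside an Observe-Orient-Decide-Act-Learn cycle.
ACTIONS = ["approach", "withdraw", "signal"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2         # learning rate, discount, exploration

q_table = defaultdict(float)                   # (state, action) -> expected value

def oodal_step(observe, act, reward_of, state):
    percept = observe()                                              # Observe
    locus = "internal" if percept["controllable"] else "external"    # Orient
    options = ACTIONS if locus == "internal" else ["signal"]         # locus limits choices
    if random.random() < EPSILON:                                    # Decide (epsilon-greedy)
        action = random.choice(options)
    else:
        action = max(options, key=lambda a: q_table[(state, a)])
    next_state = act(action)                                         # Act
    reward = reward_of(percept, action)        # valence of the artificial "emotion"
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                         - q_table[(state, action)])  # Learn
    return next_state

# Toy usage with stub sensing/acting/reward functions:
observe = lambda: {"controllable": True}
act = lambda a: "engaged" if a == "approach" else "idle"
reward_of = lambda percept, a: 1.0 if a == "approach" else 0.0
state = "idle"
for _ in range(200):
    state = oodal_step(observe, act, reward_of, state)
print(max(ACTIONS, key=lambda a: q_table[("idle", a)]))  # typically "approach"
```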

Full access to this whitepaper requires submitting the form on the following page. Click the link below to continue.

[download id="26864"]


A Hybrid Cognitive System for Radar Monitoring and Control using the Rasmussen Cognition Model

Authors:
Dr. James Crowder, Systems Fellow, Colorado Engineering Inc.
James Carbone, Department of Electrical and Computer Engineering, Southern Methodist University

Abstract:
The long-term goal of artificial intelligence (AI) is to provide machines with the capabilities to learn, think, and reason like humans. To achieve this long-term goal, it is necessary to introduce human-like cognitive abilities into AI systems to create truly self-adaptive artificially intelligent systems. This marriage of human cognitive skills with "machines" creates hybrid systems that have characteristics of both. The question becomes: which human cognitive model is appropriate for hybrid artificially intelligent systems? The purpose of this paper is to discuss the development of cognitive models to be infused into a modern radar system to create a Cognitive Radar System (CRS). Hybrid artificially intelligent systems can be divided into two main categories: (a) human-in-the-loop systems with hybrid augmented intelligence, requiring human-AI communication and collaboration, and (b) cognitive computing-based AI, in which a fully cognitive model is infused into the machine to allow fully autonomous operation. Here we discuss the first type: human-in-the-loop cognitive radar systems that provide intelligent decision support and analysis for radar systems. The design of hybrid artificial intelligence methods and algorithms is presented, with applications to improving modern radar systems, utilizing the Rasmussen Cognition Model (RCM), which we believe is appropriate for the hybrid cognitive system used to create a Cognitive Radar System (CRS).
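To ground the choice of the Rasmussen Cognition Model, the sketch below shows one way its three behavior levels (skill-based, rule-based, and knowledge-based) could be used to route radar events in a human-in-the-loop system. The event fields, thresholds, and handler actions are hypothetical illustrations, not the CRS design presented in the paper.

```python
from dataclasses import dataclass

@dataclass
class RadarEvent:
    track_confidence: float   # 0..1 confidence in the current track
    matches_known_rule: bool  # does an existing engagement rule apply?

def skill_based(event):       # automatic, well-practiced response
    return "auto-adjust beam dwell"

def rule_based(event):        # stored if-then procedure applies
    return "apply matched engagement rule"

def knowledge_based(event):   # novel situation: defer to the human operator
    return "escalate to operator with decision-support summary"

def dispatch(event: RadarEvent) -> str:
    """Route the event to a Rasmussen behavior level."""
    if event.track_confidence > 0.9:
        return skill_based(event)
    if event.matches_known_rule:
        return rule_based(event)
    return knowledge_based(event)

print(dispatch(RadarEvent(track_confidence=0.4, matches_known_rule=False)))
# -> escalate to operator with decision-support summary
```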

Full access to this whitepaper requires submitting the form on the following page. Click the link below to continue.

[download id="26857"]


Dynamic Heuristics for Surveillance Mission Scheduling with Unmanned Aerial Vehicles in Heterogeneous Environments

Authors:
Dylan Machovec, Department of Electrical and Computer Engineering, Colorado State University, Fort Collins
James A. Crowder, Colorado Engineering Inc.
Howard Jay Siegel, Department of Electrical and Computer Engineering and Department of Computer Science, Colorado State University, Fort Collins
Sudeep Pasricha, Department of Electrical and Computer Engineering and Department of Computer Science, Colorado State University, Fort Collins
Anthony A. Maciejewski, Department of Electrical and Computer Engineering, Colorado State University, Fort Collins

Abstract:
In this study, we focus on the design of mission scheduling techniques for unmanned aerial vehicles that work in dynamic environments and determine effective mission schedules in real time. The effectiveness of a mission schedule is measured using a surveillance value metric, which incorporates the amount and usefulness of the information obtained from surveilling targets. We design a set of dynamic heuristic techniques, which are compared and evaluated on their ability to maximize surveillance value across a wide range of scenarios generated by a randomized model. We consider two comparison heuristics, three value-based heuristics, and a metaheuristic that intelligently switches between the best value-based heuristics. The novel metaheuristic is shown to find effective solutions that are, on average, at least as good as those of all other techniques we evaluate, across all scenarios we consider.
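To illustrate what a value-based heuristic can look like in this setting, here is a minimal greedy sketch that ranks targets by surveillance value per unit of flight time and fills a single UAV's time budget. The target fields, the diminishing-returns value model, and all numbers are assumptions for illustration, not the heuristics or metric defined in the paper.

```python
def surveillance_value(target, dwell_time):
    # Toy model: priority-weighted value with diminishing returns over dwell time.
    return target["priority"] * (1.0 - 0.5 ** dwell_time)

def greedy_value_heuristic(targets, uav_time_budget, dwell_time=1.0):
    """Assign surveils in decreasing order of value per unit of flight time."""
    ranked = sorted(
        targets,
        key=lambda t: surveillance_value(t, dwell_time) / t["travel_time"],
        reverse=True,
    )
    schedule, remaining = [], uav_time_budget
    for t in ranked:
        cost = t["travel_time"] + dwell_time
        if cost <= remaining:
            schedule.append(t["name"])
            remaining -= cost
    return schedule

targets = [
    {"name": "T1", "priority": 5.0, "travel_time": 2.0},
    {"name": "T2", "priority": 2.0, "travel_time": 0.5},
    {"name": "T3", "priority": 8.0, "travel_time": 4.0},
]
print(greedy_value_heuristic(targets, uav_time_budget=6.0))  # -> ['T2', 'T1']
```

A switching metaheuristic of the kind described in the abstract would run several such heuristics (or track their recent performance) and pick whichever is currently producing the highest surveillance value.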

Full access to this whitepaper requires submitting the form on the following page. Click the link below to continue.

[download id="26843"]


Applications for Intelligent Information Agents (I2As): Learning Agents for Autonomous Space Asset Management (LAASAM)

Authors:
Dr. James Crowder, Raytheon Intelligence and Information Systems
Dr. Lawrence Scally, President and CTO, Colorado Engineering Inc.
Michael Bonato, VP of Program Management, Colorado Engineering Inc.

Abstract:
Current and future space, air, and ground systems will continue to grow in complexity and capability, creating a serious challenge to monitor, maintain, and utilize systems in an ever-growing network of assets. The push toward autonomous systems makes this problem doubly hard, requiring that the on-board system contain cognitive skills that can monitor, analyze, diagnose, and predict behaviors in real time as the system encounters its environment. Described here is a cognitive system of Learning Agents for Autonomous Space Asset Management (LAASAM) that consists of Intelligent Information Agents (I2A) that provide an autonomous Artificially Intelligent System (AIS) with the ability to mimic human reasoning in the way it processes information and develops knowledge [Crowder 2010a, 2010b]. This knowledge takes the form of answering questions and explaining situations that the AIS might encounter. The I2As are persistent software components, called Cognitive Perceptrons, which perceive, reason, act, and communicate. We present the description, methods, and framework required for Cognitive Perceptrons to provide the AIS with the following abilities:

1. Allows the AIS to act on its own behalf;
2. Allows autonomous reasoning, control, and analysis;
3. Allows the Cognitive Perceptrons to filter information and communicate and collaborate with other Cognitive Perceptrons;
4. Allows autonomous control to find and fix problems within the AIS; and
5. Allows the AIS to predict a situation and offer recommended actions, providing automated complex procedures.

A Cognitive Perceptron Upper Ontology will be provided, along with detailed descriptions of the I2A framework required to construct a hybrid system of Cognitive Perceptrons, as well as the Cognitive Perceptron processing infrastructure and rules architecture. In particular, this paper presents an application of Cognitive Perceptrons to Integrated System Health Management (ISHM), and specifically Condition-Based Health Management (CBHM), providing the ability to manage and maintain an AIS by utilizing real-time data to prioritize, optimize, maintain, and allocate resources.
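As a rough illustration of the perceive-reason-act-communicate cycle, the sketch below shows a toy agent interface in that style applied to a condition-based health-management check. The class name reuses the paper's term, but the method signatures, telemetry fields, and threshold are assumptions, not the LAASAM framework itself.

```python
class CognitivePerceptron:
    """Toy agent that perceives telemetry, reasons about asset health,
    acts through an actuator callback, and communicates with peers."""

    def __init__(self, name, peers=None):
        self.name = name
        self.peers = peers or []   # other perceptrons to collaborate with
        self.beliefs = {}          # accumulated knowledge about the asset

    def perceive(self, telemetry):
        # Filter raw telemetry down to the signals this agent cares about.
        self.beliefs.update({k: v for k, v in telemetry.items()
                             if k.startswith("thermal")})

    def reason(self):
        # Diagnose a condition and recommend an action.
        temp = self.beliefs.get("thermal_temp_c", 0.0)
        if temp > 80.0:
            return {"condition": "overheating", "action": "reduce duty cycle"}
        return {"condition": "nominal", "action": None}

    def act(self, assessment, actuator):
        if assessment["action"]:
            actuator(assessment["action"])

    def communicate(self, assessment):
        for peer in self.peers:
            peer.receive(self.name, assessment)

    def receive(self, sender, assessment):
        self.beliefs[f"peer:{sender}"] = assessment

# Usage: one agent watching thermal telemetry on an autonomous asset.
agent = CognitivePerceptron("thermal-monitor")
agent.perceive({"thermal_temp_c": 85.2, "power_bus_v": 28.1})
print(agent.reason())  # {'condition': 'overheating', 'action': 'reduce duty cycle'}
```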

Full access to this whitepaper requires submitting the form on the following page. Click the link below to continue.

[download id="20155"]


Implicit Learning in Artificial Intelligent Systems: The Coming Problem of Real, Cognitive AI

Authors:
Dr. James Crowder, Systems Fellow, Colorado Engineering Inc.
Dr. Shelli Friess, LPC, NCC, ACS, School of Counseling, Walden University
Dr. John Carbone, Electrical and Computer Eng., Southern Methodist University

Abstract:
Over the last few decades there has been much discussion and research on the differences between implicit and explicit learning and, subsequently, the differences between the explicit and implicit memories each produces. Implicit learning differs from explicit learning in that it happens through the unconscious acquisition of knowledge; it represents a fundamental process in overall cognition, arising from the unconscious acquisition of knowledge and skills as an entity interacts with its environment. The open questions that follow are how we recognize that implicit learning has occurred, how it will affect the entity's overall cognitive functions, and how we can measure and influence it within an entity. Here we discuss self-adapting, cognitive, artificially intelligent entities and implicit learning within them: how it leads to implicit memories, and how those memories might affect the overall artificial intelligent entity, for better or worse.

Full access to this whitepaper requires submitting the form on the following page. Click the link below to continue.

[download id="19988"]


Anytime Learning: A Step Toward Life-Long AI Machine Learning

Authors:
Dr. James Crowder, Systems Fellow, Colorado Engineering Inc.
Dr. Shelli Friess, LPC, NCC, ACS, School of Counseling, Walden University
Dr. John Carbone, Department of Electrical and Computer Eng., Southern Methodist University

Abstract:
Current machine learning architectures, strategies, and methods are typically static and non-interactive, making them incapable of adapting to changing and/or heterogeneous data environments in real time or near real time. In real-time applications, large amounts of disparate data must be processed and learned from, with actionable intelligence delivered as evolving activities are recognized. Applications such as Rapid Situational Awareness (RSA) in support of critical systems (e.g., Battlefield Management and Control) require analytical assessment and decision support that automatically processes massive and ever-increasing amounts of data to recognize evolving events, raise alerts, and provide actionable intelligence to operators and analysts.
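The title's "anytime" idea can be made concrete with a small sketch: a learner that updates incrementally on streaming data and can be interrupted at any moment, always returning its best current answer. The running-mean "model", the deadline value, and the synthetic stream are toy assumptions, not the architecture proposed in the paper.

```python
import time

class AnytimeMeanEstimator:
    """Trivial incremental learner: its estimate is valid whenever it is queried."""
    def __init__(self):
        self.count, self.mean = 0, 0.0

    def update(self, x):              # one cheap incremental step per sample
        self.count += 1
        self.mean += (x - self.mean) / self.count

    def current_estimate(self):       # usable at any interruption point
        return self.mean

def run_until_deadline(stream, learner, deadline_s=0.01):
    start = time.monotonic()
    for sample in stream:
        if time.monotonic() - start > deadline_s:
            break                     # interrupted: answer quality grows with the time given
        learner.update(sample)
    return learner.current_estimate()

# Synthetic stream; with more time before the deadline, the estimate improves.
stream = (float(i % 7) for i in range(1_000_000))
print(run_until_deadline(stream, AnytimeMeanEstimator()))  # roughly 3.0
```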

Full access to this whitepaper requires submitting the form on the following page. Click the link below to continue.

[download id="19981"]


Systems Level Thinking for Artificial Intelligent Systems

Authors:
Dr. James Crowder, Systems Fellow, Colorado Engineering Inc.
Dr. Shelli Friess, LPC, NCC, ACS, School of Counseling, Walden University

Abstract:
The systems-thinking perspective has become increasingly prevalent in engineering, business, and management. Systems thinking enables systems, people, and/or organizations to study and understand the interactions between individuals (or subsystems), departments (or system elements), and/or business units (or legacy systems) within an organization or an overall system-of-systems design. The elements considered in systems thinking then predict and produce behaviors that are fed back into the overall systems-thinking process to drive the changes the organization or system needs in order to achieve the desired behavior or results. In short, systems thinking seeks to understand how the different parts of a system influence one another and the system as a whole. Unlike critical thinking, systems thinking requires many skills to create a holistic view of an entire system and its current and predicted behavior. The purpose of this paper is to begin the discussion and lay out the concepts for systems-level thinking for artificial intelligence.

Full access to this whitepaper requires submitting the form on the following page. Click the link below to continue.

[download id="19974"]


Surveillance Mission Planning: Model, Performance Measure, Bi-Objective Analysis, Partial Surveils

Authors:
Ryan Friese, Pacific Northwest National Laboratory (PNNL)
Dr. James Crowder, Colorado Engineering Inc.
Howard Jay Siegel, Elect. and Computer Engineering and Computer Science Depts., Colorado State University
John Carbone, Computer Science Dept., Southern Methodist University

Abstract:
We examine the trade-offs between energy and performance when conducting surveillance mission planning in a multi-vehicle, multi-target, multi-sensor environment. The vehicles are heterogeneous UAVs (unmanned aerial vehicles) that must surveil heterogeneous targets across a geographically distributed area within a given period of time (here, 24 hours). We design a new model for the surveillance of heterogeneous targets by heterogeneous UAVs. Based on this model, we define a new system-wide surveillance performance measure that accounts for the targets surveilled, the number of times each target is surveilled, the UAV used for each surveil, the sensor type on that UAV, the priority of each target, and the allowance of partial surveil times. We then implement a genetic algorithm (GA) for a bi-objective analysis of energy versus surveillance performance for a set of realistic system parameters; the GA's fitness function is based on our new model and performance measure. We construct a Pareto front of mappings of UAVs and sensors to targets to study the trade-offs between the two conflicting objectives of maximizing surveillance performance and minimizing energy consumed. We also examine how allowing partial surveils of targets impacts system performance.
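The bi-objective trade-off rests on Pareto dominance: one mapping dominates another if it is no worse in both objectives and strictly better in at least one. A minimal sketch of the non-dominated filter is shown below; the candidate (surveillance value, energy) pairs are made-up numbers, not results from the paper's GA.

```python
def dominates(a, b):
    """a dominates b: no worse in both objectives, strictly better in one.
    Objectives: maximize surveillance value, minimize energy consumed."""
    value_a, energy_a = a
    value_b, energy_b = b
    return (value_a >= value_b and energy_a <= energy_b) and (
        value_a > value_b or energy_a < energy_b)

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

# (surveillance value, energy consumed) for a few hypothetical UAV/sensor-to-target mappings
candidates = [(120.0, 40.0), (100.0, 25.0), (90.0, 30.0), (130.0, 55.0)]
print(pareto_front(candidates))  # -> [(120.0, 40.0), (100.0, 25.0), (130.0, 55.0)]
```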

Full access to this whitepaper requires submitting the form on the following page. Click the link below to continue.

[download id="19968"]