Technology – Cognitive Modeling


Cognitive Modeling with NeurOS

NeurOS facilities enable modeling a wide range of cognitive capabilities at scales from local neuron assemblies through “whole brains”.  As with biological brains, cognitive functions emerge from the synergy of interconnection and interaction among components.

Here is an overview of NeurOS facilities and how they interact.

  • Built-in modules represent basic input/sense, output/action, processing and memory capabilities, similar to groups/assemblies/clusters/layers of neurons.
  • Links compose function into both feed-forward and feed-back directed neural graph flow networks. Links also allow commingling of signals from multiple sources, as needed for domain fusion. Such composition implements processing pipelines, feedback and resonance, abstraction, self-sustaining activity, and the ebb, flow and recombination of neural activity.
  • Sub-graphs of modules and links can operate like distinct functional brain regions.
  • Parameters provide customization of module function. Parameters can be static or dynamic, set to live expressions. Shared parameters loosely model brain function modulation mechanisms like local neuro-chemical concentrations.
  • Memory pattern spaces together with memory pattern modules provide extensive composable long-term memory to model a wide range of memory and learning phenomena.
  • Integration points include external function, wrapper, custom and stream modules to incorporate a wide range of external functionality, and work synergistically with built-in modules.

Plausible approaches and design patterns for modeling a wide range of cognitive functions using NeurOS are surveyed below, in no particular order.  Several demonstration videos show combinations of these approaches, and more are discussed in detail with examples in the BICA 2014 technical paper.

perceptual preprocessing · motion tracking · perception enhancement · pattern learning and matching · abstraction and reification · prediction · imagination · context · classification · synonyms and naming · domain fusion · learning and teaching · machine learning · working memory, resonance · “thinking” · attention · distraction · neural plasticity · meta-cognition · behavior · plans · language and meaning · language generation · state modulation · expertise · puzzle solving · Kahneman’s “Systems 1 and 2” · music · time · cross-inhibition · clustering · anomaly detection

  • perceptual preprocessing: Early low-level sensory processing can use either or both of fixed-function computation and memory-based learning:
    • Fixed-function processing typically uses Transformer, Filter and Group modules, and in the future, signal processing modules such as DTFT. These can take advantage of optimized algorithms such as matrix manipulation, and even specialized hardware.   Module internal state and dynamic parameters can implement modest learning. This approach seems most appropriate for high-dimensional dimension-reducing early processing, like low-level sensory (e.g., visual) processing.
    • Processing subject to substantial learning, such as learning a vocabulary (at any level of abstraction) of environmental regularities, is better implemented with memory pattern modules. These can commit and incrementally adjust patterns to match experience, and signal new unmatched patterns. 
      The “Perception” design pattern in the technical paper shows both approaches for simple visual center-surround contrast enhancement.
  • motion tracking: The “Motion Tracking” design pattern in the technical paper shows a very simple example of locating and focusing on a region of maximum change in a small fixed visual field. This uses matrix manipulation to compute differences among successive image frames and to localize the moving region. Motion tracking in a more complex visual system with eye motion and multiple visual fields would involve an open loop graph: detect features to track within a broad visual field, use them to stimulate “muscle” behaviors to move a camera, loop to maintain focus on the movement area.
  • perception enhancement: The “see what we expect to see” usage can help find known patterns in busy noisy scenes. In the “Where’s Willie?” design pattern in the technical paper, imagining the visual features of Willie can combine with actual image input to heighten the recognition of the known image.
  • pattern learning and matching: This is the “bread and butter” of NeurOS memory pattern modules for Set, Sequence and Temporal patterns. What is learned and matched is determined by the range of input event IDs. The generality of link connectivity allows parallel processing of common input fields, mixing and matching of input fields, and cascading of pattern matching. Parameters allow different matching criteria along a range of {any/OR, some, many, most, all/AND}, and enable learning exemplars vs. stereotypes (see classification). A small matching-semantics sketch appears after this list.
  • abstraction and reification: Abstractions are inherently many-to-few relationships. Pattern memory modules naturally implement abstraction: multiple input features, concurrently, in partial-ordered sequence, or in temporal relationship, create a pattern, which is an abstraction. Patterns of patterns, by cascading multiple pattern memory modules, enable arbitrary layers of abstraction. The Reify module implements the reverse transformation: abstractions down to concrete elements. Another NeurOS approach, for non-learning abstractions like early perceptual processing, can use processing modules like the Transformer, Filter and Group Operations modules. See the “What’s That Tune?” demonstration application and the Perception pre-processing section of the technical paper.
  • prediction: “We see what we expect to see.” A typical sub-graph has input features (at any level of abstraction) feeding a Set or Sequence pattern module with a mid-range semantic parameter (some, many, or most), so that a subset of input features stimulates possibly matching patterns. A Filter module and/or a Group Operations module then selects the highest-confidence pattern(s). These feed a Reify module, which generates all the members of the matched patterns, as if they had been experienced. The generated (“imagined”) features may precede receiving actual feature inputs. (See the “Word and Phrase Recognition” and “Experience Corrective and Predictive Filter” demos; a small prediction sketch also appears after this list.)
  • imagination: Like prediction, imagination uses reification of known patterns to generate features not actually perceived. Commingling such “imagined” features with actual feature input, either through a feedback loop or through multiple links converging on following modules, can lead to interpreting and acting on something partially or totally imaginary. In the limit, stimulation of known patterns without any external input can function similarly. Speculatively, this is what may happen during sleep: multiple patterns may get stimulated randomly, leading to strange juxtapositions of features from the multiple active patterns.
  • context: The characteristics of a current conversation or situation affect our recognition and interpretation of inputs and concepts. The “Context Priming” design pattern in the technical paper shows the use of context to disambiguate words. A context is a (learned) Set pattern with a “many” semantic. It collects words commonly heard in the same conversation. A meaning is a Set pattern that combines a word and a context. An ambiguous word like “beat” would feed two or more such meaning patterns, one for each context, such as “sports” or “music”. As words arrive, a current context is established and sustained with a Working Memory module, but fades with disuse. When an ambiguous word like “beat” arrives, the strongest interpretation, courtesy of the strongest currently active context, leads to the correct meaning: “win a game” vs. “tempo”.
  • classification: NeurOS enables a loose multi-hierarchical non-exclusive classification architecture more typical of human thinking. An arriving feature set may stimulate multiple exemplars and stereotypes at multiple levels of generality. Pattern modules offer a new-pattern threshold parameter. If set high, new feature patterns not closely matching any previously learned pattern in a pattern space produce a new exemplar pattern. If set lower, new feature patterns adjust feature weights of existing somewhat-matching patterns, creating various levels of stereotypes. The “Classification” demonstration video shows this model. (A novelty-threshold sketch appears after this list.)
  • synonyms and naming: Most generally, synonyms are highly similar aspects of a thing or concept. Synonyms can be words, but more generally, names, phrases, sounds, images, or other abstractions. The NeurOS Set module with a disjunctive/any/OR setting of its matching semantic parameter directly implements a synonym semantic. Multiple roughly concurrent features at its inputs can form a synonym set, which can accrete other synonyms over time through additional concurrency. Arrival of any member of the set activates the set, which can be used in a variety of ways, including reification to generate all the known synonyms. Several design patterns and demonstration applications use this construct.
  • domain fusion: Links from multiple paths through a neural graph can converge on an input port of a module. This commingles activity from different sources, such as multiple sensory domains, or mixing current experience with imagination feedback. Thus, Set, Sequence and Temporal memory pattern modules can create and match patterns across domains as easily as within domains.
  • learning and teaching: The NeurOS long-term memory pattern modules do the bulk of the learning, of new patterns and adjustments to known patterns. On their own, these modules learn unsupervised, with the specificity or generality of learned patterns governed by a known-pattern-matching threshold. Supervised learning, as in the “What’s That Tune?” demonstration example, associates a name or other concept or symbol as a synonym Set pattern with a newly learned pattern. Learning is generally incremental, governed by learning profile and rate parameters. Teaching, as in the “Teaching” design pattern in the technical paper, uses separate pattern modules sharing the same memory pattern space. One path, for continuing experiential learning, has a low learning rate for gradual adjustments. The other path, for trusted teacher input, has a high learning rate, for expected “good example” input patterns. The same effect can be achieved with a single pattern module, by changing the learning rate parameter dynamically: low for normal experience, high for “teaching moments”.
  • machine learning: Machine learning approaches are often categorized as unsupervised, supervised and reinforcement learning.
    • In unsupervised learning, a system develops and adjusts multiple categories or clusters based on similarities and differences among features of individual data points. NeurOS native Set, Sequence and Temporal pattern modules directly implement unsupervised learning. New patterns are created for sufficiently unique feature combinations, and matching patterns may be incrementally adjusted to new similar experiences. Classification systems of many kinds are easily created using neural circuits combining such pattern modules. In particular, flexible multi-hierarchical classification systems of exemplar and stereotype patterns, similar to human learning and knowledge representation, are easily built. See the “Classification: Exemplars and Stereotypes?” demonstration example.
    • In supervised learning, “correct” answer labels are provided along with individual data point features. Weights or probabilities of connections among neural elements are adjusted until the system emits mostly correct answers to input features. The “What’s That Tune?” demonstration example shows one way to assemble supervised learning in NeurOS. Exemplar and/or stereotype feature patterns are learned as in unsupervised learning, and the “correct” answer label is associated with each such pattern as a synonym.
    • In reinforcement learning, a reward results from reaching a goal, but no external guidance is provided for choices of actions to reach the goal. In NeurOS, a path to a goal is naturally represented by a Sequence pattern (or a Temporal Sequence pattern if fine timing control is needed, as in playing music or coordinating muscles). Hierarchies of sub-sequences of actions are learned by experience in an evolution-like way similar to how young children learn: try something, observe results. If something interesting happens (a reward), repetition increases the salience (durability) of the combination, adding it to the available repertoire of sub-sequences available to try in the future.
  • working memory, resonance: The built-in Working Memory module provides convenient general-purpose parameterized control over persistence/repetition of activations without needing to build feedback loops. Momentary signals can be repeated according to a decay profile such as exponential, gaussian or uniform distributions. More extensive and complex resonance concepts can be implemented with feedback links. Care must be taken to ensure that the resonance terminates, usually through signal gains less than 1 to dampen the loop and/or new input processing to move the resonance to a new mode. More complex meta-cognition paths can dynamically adjust the gain on feedback paths via dynamic parameters or using a Set module as a gate. (A decay-profile sketch appears after this list.)
  • “thinking”: The “Thinking” design pattern in the technical paper shows free associational chaining through known memory patterns via a loop of a pattern module and a Reify module. External inputs stimulate a known pattern, which is reified into its components, which subsequently combine to activate other patterns which are further reified into their components. A gain factor less than 1 keeps the loop from going out of control. New inputs can re-energize the loop and change the course of chaining through known patterns.
  • attention: The essence of attention is being able to strengthen some stimuli or internal processes and perhaps weaken others. These adjustments then can heighten or dampen consequent activities. Two different NeurOS design approaches can have this effect:
    • A Set pattern module can serve a “gating” function. If a Set pattern module’s semantic matching parameter is set to require “some” or “many” or “most” inputs to activate a pattern, then one high-weight input signal can be used for gating: a high strength on that input enables other inputs to stimulate the pattern, while a low or negative strength can inhibit activation.
    • Many module types have a numeric “gain” parameter that multiplies values of sent or received events. This parameter can be set to an expression involving other parameters, for example, analogs of mental or emotional state. See the “State Modulation” design pattern in the technical paper, where a perception of “danger” turns up the gain on a perceptual path and turns down the gain on higher-order abstract “thinking” processes. (A gain-modulation sketch appears after this list.)
  • distraction: Distractions can delay the arrival or diminish the strengths of important feature signals at the inputs to pattern recognition modules, diminishing and/or delaying the recognition. The “Concentration and Distraction” design pattern in the technical paper shows this effect.
  • neural plasticity: The equivalents of creating new synapses and creating new neurons are handled implicitly by NeurOS memory pattern modules. If a new feature co-occurs with familiar feature combinations, any strongly matched pattern will be adjusted, based on a learning rate parameter, to include the new feature. Likewise, older features no longer experienced have their weights diminished. Distinct-enough feature combinations cause new patterns to be created. This models both the commitment of an existing neuron to a new pattern and the creation of a new neuron. Finally, neural graphs can be modified dynamically, while in operation. Although this has a primary benefit during application development, it can be exploited (with care) to “grow new brain circuits” dynamically: meta-cognitive processes monitoring cognitive system operation can conceivably “wire in” new module and sub-graph instances.
  • meta-cognition: From the point of view of NeurOS, meta-cognition is “just more circuitry” that watches activity at various points in a neural graph and perhaps exerts some control. Here’s one tiny example. Two cascaded Group modules can compute a moving double-integral of activity on an output port, giving a measure of how “hard” a particular subsystem (like a motor or battery) has been working. Following Filter and Transformer modules can determine when this level exceeds a threshold (which itself can be a dynamic parameter!) and change a dynamic parameter, for example, to dampen some activity or activate a “find a power outlet” behavior. Another tiny example is to use recognition of a familiar person (a signal emitted from a pattern module) to feed a Transformer to adjust a learning-rate parameter: up for a favorite teacher, down for someone we don’t trust. (An effort-monitoring sketch appears after this list.)
  • behavior: A simple model of behavior in NeurOS is a collection of concrete-to-abstract skills. Each skill is a synonym (any/disjunction/OR) Set or a binary Sequence of preconditions and actions. A precondition is typically a conjunctive Set pattern of observable features and explicit intentional actions from other skills. The actions are a (partially-ordered) Sequence pattern of stimuli, either atomic actions or preconditions to other skills. Behavior execution uses a feedback loop. Once the preconditions of one (or more) skill(s) are satisfied (many/most/all semantics), the skill synonym/sequence is activated. When reified and filtered for just the actions, the actions are further reified, leading to atomic actions (e.g., motions) or stimulating other skills on the next feedback loop iteration. Such a loose federation of skills enables flexible responses to changing and unexpected conditions. Skills are learned in the same way as other patterns: concurrence or sequence of events.
  • plans: A plan in terms of the above behavior model is little more than just another skill whose actions are enabling signals to other skills for steps of the plan.
  • expertise: Kahneman defines expertise in a domain as a large collection of “miniskills” capturing predictable regularities in the domain. In NeurOS terms, a skill can be modeled as a Set or Sequence of preconditions and actions. Preconditions are typically a Set or Sequence pattern matching observable features or signals from other skills to activate the skill. These may include patterns for the expected or desired outcome of executing the skill. Actions perform the effects of the skill, changing the external environment, further internal pattern activations (state changes), and/or stimuli (preconditions) to other skills. Looping neural graph topologies cascade execution of skills, with results of one skill enabling other skills in a flexible responsive mesh. These skills are learned from experience.
  • language and meaning: This is, of course, a complex space, so we can only indicate a broad research direction. In NeurOS, a natural representation of language and meaning involves converging paths from sensory inputs and layers of recombinant abstraction. Set and Sequence patterns record layers of experienced symbols, vocabulary and phrasing. Set patterns with an any/disjunction/OR semantic provide synonyms at any level of abstraction (e.g., multiple letter combinations making the same sound, multiple spellings for a word, multiple phonemes for a word’s pronunciation, multiple words for a concept, alternate phrasings, and so on). New inputs are continually “coin sorted” into this mesh, with new combinations learned by pattern modules at any abstraction level.
  • language generation – “In your own words”: Adopting the coarse “language and meaning” architecture suggested above, generating language from an abstract concept involves a reverse trip generally “downward” through layers of patterns. The Reify module does most of the work here. For example, reifying a synonym pattern would generate all the known words for a concept, with the most frequent/important ones (in the system’s experience) having the highest strengths. The trees of possibilities can be pruned with Filter and Group modules. As one’s vocabulary grows through experience, the reification can generate different expressions. (Clearly there is a lot more to say about style and expression affecting such pruning, as well as grammar, spelling, vocal tract modeling, etc. – this is just the bare bones of an approach.)
  • state modulation: The “State Modulation” usage pattern in the technical paper shows how a shared dynamic parameter can be used to implement broad adjustments to attention and focus. A pattern module recognizing “danger” adjusts a shared parameter which is used to compute the gain parameters of different modules, for example, to heighten awareness and dampen abstract thinking processes in the presence of danger, and reverse the effect when danger passes.
  • puzzle solving: The “Crossword Puzzles” and “Anagrams” design patterns in the technical paper begin to scratch the surface of puzzle solving activity. Both use the auto-associative partial-pattern matching capabilities of pattern modules to combine clues or other inputs with known patterns to generate possibilities, often in parallel. Plausible future modeling directions include a) iterative looping to adjust matching parameters to generate less likely candidate answers, and b) reification of Sequence patterns containing successive steps of a solving strategy.
  • Kahneman’s “Systems 1 and 2”: Daniel Kahneman postulates two cooperating “thinking” systems. System 1 is fast, parallel, intuitive, based on pattern recognition. System 2 is slow, serial, deliberative, and based on inference. The two cooperate in various ways. The “Cousins” example application video demonstrates a simple version of this. System 1 is built with pattern modules, which quickly activate patterns matching current inputs in parallel. System 2 uses a Reify module and a feedback loop. The Reify module generates successive inference steps feeding System 1 patterns, yielding successive pattern matches of intermediate results. When a result set of patterns finally appears, it can be recaptured as a new memory pattern, to allow System 1 to more quickly answer the same question the next time.
  • music: Various aspects of music can be represented directly as Set, Sequence and Temporal patterns. A Set pattern can represent a chord or the frequency power signature of an instrument. A Sequence pattern can represent a tempo-, style- and/or key-independent melody including multiple voices and harmony. (See the “What’s That Tune?” demonstration.) A Temporal pattern can represent arbitrarily precise details with variable temporal matching tolerance. All these kinds of patterns can be used in concert: the “What’s That Tune? Enhanced Version” demonstration application in the technical paper records a melody as originally heard in a Temporal pattern for later replay, and an abstract-invariant encoding as a Sequence pattern for speed/style/key/mistake tolerant melody matching. You play enough of a melody to be recognized, and it plays the whole melody as originally learned.
  • time: Neuroscience research has identified various brain mechanisms for representing, learning and recognizing time intervals and time patterns and deviations. NeurOS provides several mechanisms for handling time:
    • Every signal event in a neural graph has a time stamp. Processing modules have access to the time stamp, including the Transformer and Filter modules, external programs accessed via a Wrapper module, and Custom modules. These modules are free to implement their own time-related algorithms (subject to NeurOS computing rules).
    • The NeurOS Intervals module quantizes incoming event time intervals according to interval quanta parameters. If, for example, interval quanta include 50 and 100 msec intervals, a 70 msec event time interval for a signal may yield two events: (t, id_50, 0.6) and (t, id_100, 0.4), indicating relative stimulation strengths of the two neighboring intervals. This is similar, for example, to frequency quantization performed in biological auditory systems. Together with other modules (e.g., Filter), one can build a bank of “temporal notch filters” to create a “temporal signature” of a signal. Such timing “circuits” can be shared among multiple domains (e.g., sense modalities, behaviors) or replicated as needed. (An interval-quantization sketch appears after this list.)
    • The NeurOS Temporal Pattern module enables learning of temporal patterns over one or many input signals. (Think of music and fine motor coordination.) Matching of new inputs to learned temporal patterns may include parameterized timing and value tolerances, to mimic a variety of biological phenomena.
  • cross-inhibition: There is some neuroscience evidence of cross-inhibition or cross-suppression among neural circuits: stronger activity on some neurons may diminish the activity of competing neurons. This has the net effect of increasing the contrast (relative activity) among alternative recognitions or interpretations of stimuli from a common input field. In NeurOS, two mechanisms are available to model cross-inhibition: explicit and implicit. In explicit cross-inhibition, individual memory patterns (sets, sequences, temporal patterns) may include other competing patterns as members, with negative weights, creating inhibitory local feedback paths. Initial activations of similar patterns feed back inhibitory signals to each other, and the collection of competing patterns settles quickly with the most likely pattern retaining the strongest activation. Implicit cross-inhibition, on the other hand, does not require explicit cross-pattern membership. Instead, if a pattern matching mode parameter is set to “exclusive”, a pattern matching score is diminished by the strengths of input features not part of the pattern. Thus, a “meow” sound might diminish the strength of a “dog” pattern, which of course would not include a “meow” feature. (An exclusive-matching sketch appears after this list.)
  • clustering: Unsupervised clustering is easily achieved using a single NeurOS (set, sequence or temporal) memory pattern module. Unlike the popular K-means clustering algorithm, it is not necessary to pre-define the number of desired clusters: the clusters emerge from similarities among the input data (feature) points. A simple neural circuit first scales raw input data/feature values to the (0,1) range of NeurOS signals (modeling neuron spiking rates from 0 to the maximum possible spiking rate). The resulting feature signals feed a single pattern module managing a NeurOS memory pattern space that will contain the multiple cluster patterns. If an incoming feature set (data point) matches any (previously learned) memory pattern well enough, the current input feature values are used to update the weights and value mean/spreads of the pattern’s features to accommodate the new data point. If the current data point fails to match any previously learned cluster pattern well enough, according to a “novelty” threshold parameter, a new pattern (new cluster) is created for the new data point. The novelty threshold controls how fine or coarse-grained the clusters are. The “out” output port from the memory pattern module delivers signals with values corresponding to the possibly multiple cluster pattern matches; the largest of these is the most likely cluster for the current input. The “new” output from the memory pattern module may be used further to identify when some novel or anomalous input pattern has been encountered. (A clustering sketch appears after this list.)
  • anomaly detection: An anomaly is essentially a combination (concurrent set, sequence, temporal sequence) of features/signals that does not match any previously learned pattern. An anomaly may be a source of alerts to an unfamiliar situation and/or an opportunity to learn something new. In NeurOS, anomalies are part of the basic learning fabric. The Set, Sequence and Temporal long-term memory pattern modules offer a “new” output port. Whenever a feature combination arrives at such a module’s input port, if it does not match any previously learned pattern in the module’s memory pattern space well enough (that is, no pattern-matching score exceeds the module’s new-pattern (novelty) threshold), then a new pattern is created for the combination, and the pattern (its signal identifier) is sent out the module’s “new” port, to be used in subsequent processing of new patterns. A more comprehensive usage, determining novelty across many memory spaces, is also easily constructed as a simple neural circuit. Input features are fed in parallel to one or more memory pattern modules. Matching scores of all relevant patterns are emitted on each module’s “out” output port. Merging all these signals at the “in” input port of a Group Operations module allows it to select the strongest matching signal. A subsequent Transformer module can subtract this strongest match from 1.0 to produce a signal that is strongest for the newest, most unfamiliar patterns. (A novelty-scoring sketch appears after this list.)
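The sketches below are referenced from the items above. They are minimal illustrative Python, not the NeurOS API or implementation: every name, threshold, formula and data structure in them is an assumption made only to make the described behavior concrete.

Matching semantics (pattern learning and matching): one plausible way a Set pattern’s matching criterion along {any/OR, some, many, most, all/AND} could be scored. The threshold values are assumptions.

    # Illustrative sketch only (not the NeurOS API): scoring a Set pattern
    # against inputs under the {any/OR, some, many, most, all/AND} semantics.
    SEMANTIC_THRESHOLDS = {
        "any": 0.0,    # a single member suffices (OR)
        "some": 0.25,
        "many": 0.5,
        "most": 0.75,
        "all": 1.0,    # every member required at full strength (AND)
    }

    def set_match_score(pattern, inputs):
        """Weighted fraction of a pattern's members present in the input.
        pattern: feature id -> weight; inputs: feature id -> strength (0..1)."""
        total = sum(pattern.values())
        hit = sum(w * inputs.get(f, 0.0) for f, w in pattern.items())
        return hit / total if total else 0.0

    def matches(pattern, inputs, semantic="most"):
        score = set_match_score(pattern, inputs)
        threshold = SEMANTIC_THRESHOLDS[semantic]
        # "any" fires on any positive evidence; the others need the fraction met.
        return score > threshold if semantic == "any" else score >= threshold

    # Example: a hypothetical "dog" pattern matched against a partial observation.
    dog = {"bark": 1.0, "fur": 0.5, "tail": 0.5}
    obs = {"bark": 0.9, "tail": 0.8}
    print(set_match_score(dog, obs), matches(dog, obs, "some"), matches(dog, obs, "all"))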
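Prediction: the sub-graph described above (partial pattern match, then Filter/Group selection of the strongest matches, then Reify) rendered as plain functions. The pattern store and the 0.5 “some/many” fraction are assumptions.

    # Illustrative sketch only (not the NeurOS API): predict features before
    # they arrive by reifying the best partially matched patterns.
    patterns = {
        "HELLO": {"H": 1, "E": 1, "L": 1, "O": 1},
        "HELP":  {"H": 1, "E": 1, "L": 1, "P": 1},
    }

    def match_some(inputs, min_fraction=0.5):
        """Stimulate every pattern whose member overlap meets a mid-range fraction."""
        out = {}
        for name, members in patterns.items():
            overlap = sum(1 for m in members if m in inputs) / len(members)
            if overlap >= min_fraction:
                out[name] = overlap
        return out

    def filter_strongest(matched):
        """Filter/Group step: keep only the highest-confidence pattern(s)."""
        if not matched:
            return {}
        best = max(matched.values())
        return {n: s for n, s in matched.items() if s == best}

    def reify(selected):
        """Reify step: emit all members of the selected patterns,
        as if they had actually been experienced ('imagined' features)."""
        predicted = set()
        for name in selected:
            predicted |= set(patterns[name])
        return predicted

    seen = {"H", "E", "L"}                       # partial input so far
    predicted = reify(filter_strongest(match_some(seen)))
    print(predicted - seen)                      # features predicted before they arrive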
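Classification: how a new-pattern (novelty) threshold could decide between committing a new exemplar and adjusting an existing stereotype. The Jaccard similarity and learning rate are assumptions.

    # Illustrative sketch only (not the NeurOS API): novelty threshold controls
    # exemplar creation vs. stereotype adjustment.
    def jaccard(a, b):
        return len(a & b) / len(a | b)

    def present(features, memory, novelty_threshold=0.8, learning_rate=0.2):
        """features: set of feature ids; memory: list of {feature: weight} patterns."""
        best, best_sim = None, 0.0
        for pattern in memory:
            sim = jaccard(features, set(pattern))
            if sim > best_sim:
                best, best_sim = pattern, sim
        if best is None or best_sim < novelty_threshold:
            # High threshold: most inputs look novel, producing many exemplars.
            memory.append({f: 1.0 for f in features})
        else:
            # Lower threshold: inputs fold into existing patterns, forming stereotypes.
            for f in set(best) | features:
                target = 1.0 if f in features else 0.0
                best[f] = best.get(f, 0.0) + learning_rate * (target - best.get(f, 0.0))
        return memory

    memory = []
    present({"bark", "fur", "tail"}, memory)
    present({"bark", "fur"}, memory, novelty_threshold=0.5)   # folds into existing pattern
    print(memory)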
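Working memory: repeating a momentary signal according to a decay profile, as the Working Memory module is described as doing. The profile shapes, tick size and cutoff are assumptions.

    # Illustrative sketch only (not the NeurOS API): echo a single event
    # according to an exponential, gaussian or uniform decay profile.
    import math

    def repetitions(strength, profile="exponential", half_life=1.0,
                    dt=0.25, cutoff=0.05, duration=2.0):
        """Yield (time_offset, value) echoes of a single input event."""
        t = 0.0
        while t <= duration:
            if profile == "exponential":
                v = strength * 0.5 ** (t / half_life)
            elif profile == "uniform":
                v = strength
            elif profile == "gaussian":
                v = strength * math.exp(-(t * t) / (2 * half_life ** 2))
            else:
                raise ValueError(profile)
            if v < cutoff:
                break
            yield (t, v)
            t += dt

    for echo in repetitions(1.0, "exponential"):
        print(echo)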
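Attention and state modulation: a gain computed from a shared “danger” state, heightening a perceptual path and dampening abstract “thinking”, in the spirit of the State Modulation design pattern. The formulas are assumptions.

    # Illustrative sketch only (not the NeurOS API): gains as live expressions
    # over a shared dynamic parameter.
    shared = {"danger": 0.0}                   # shared dynamic parameter

    def perception_gain():
        return 1.0 + 2.0 * shared["danger"]    # heighten sensory paths

    def thinking_gain():
        return 1.0 - 0.8 * shared["danger"]    # dampen abstract reasoning

    def send(value, gain):
        return value * gain()                  # gain multiplies event values

    shared["danger"] = 0.9                     # a "danger" pattern fires
    print(send(0.5, perception_gain), send(0.5, thinking_gain))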
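Meta-cognition: two cascaded moving accumulations approximating a moving double integral of activity, followed by a threshold test that flips a dynamic parameter, as in the “find a power outlet” example above. Window sizes and the limit are assumptions.

    # Illustrative sketch only (not the NeurOS API): cascaded "Group"-style
    # moving sums measure how hard a subsystem has been working.
    from collections import deque

    class MovingSum:
        def __init__(self, window):
            self.buf = deque(maxlen=window)
        def __call__(self, x):
            self.buf.append(x)
            return sum(self.buf)

    activity_sum = MovingSum(window=10)        # first integral of output activity
    effort = MovingSum(window=10)              # second integral: cumulative "effort"
    EFFORT_LIMIT = 40.0                        # could itself be a dynamic parameter

    params = {"seek_power_outlet": 0.0}
    for sample in [0.8] * 30:                  # sustained high motor activity
        level = effort(activity_sum(sample))
        if level > EFFORT_LIMIT:
            params["seek_power_outlet"] = 1.0  # trigger the recovery behavior
    print(params)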
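Time (Intervals module): quantizing an event time interval against neighboring interval quanta, reproducing the 70 msec example above. The linear split between neighboring quanta is an assumption.

    # Illustrative sketch only (not the NeurOS API): split an interval's
    # stimulation strength across its two nearest quanta.
    def quantize_interval(interval_ms, quanta=(50, 100)):
        """Return (quantum, strength) events for the neighboring quanta."""
        quanta = sorted(quanta)
        lo = max((q for q in quanta if q <= interval_ms), default=quanta[0])
        hi = min((q for q in quanta if q >= interval_ms), default=quanta[-1])
        if lo == hi:
            return [(lo, 1.0)]
        frac = (interval_ms - lo) / (hi - lo)
        return [(lo, 1.0 - frac), (hi, frac)]

    print(quantize_interval(70))   # [(50, 0.6), (100, 0.4)]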
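Cross-inhibition (implicit): an “exclusive” matching mode in which input features that are not members of a pattern subtract from its score. The penalty weight is an assumption.

    # Illustrative sketch only (not the NeurOS API): non-member input features
    # diminish a pattern's matching score.
    def exclusive_match(pattern, inputs, penalty=1.0):
        """pattern: set of member feature ids; inputs: feature id -> strength."""
        support = sum(s for f, s in inputs.items() if f in pattern)
        conflict = sum(s for f, s in inputs.items() if f not in pattern)
        score = (support - penalty * conflict) / len(pattern)
        return max(score, 0.0)

    cat = {"meow", "fur", "whiskers"}
    dog = {"bark", "fur", "tail"}
    inputs = {"meow": 0.9, "fur": 0.7}
    print(exclusive_match(cat, inputs))   # strong: both features are members
    print(exclusive_match(dog, inputs))   # suppressed: "meow" is not a dog feature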
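Clustering: single-module online clustering with a novelty threshold, where clusters emerge from the data rather than being fixed in advance. The scaling, similarity measure and update rule are assumptions.

    # Illustrative sketch only (not the NeurOS API): threshold-driven online
    # clustering; a new cluster pattern is created only for novel points.
    def scale(point, lo, hi):
        """Scale raw feature values into the (0,1) signal range."""
        return [(v - l) / (h - l) for v, l, h in zip(point, lo, hi)]

    def similarity(a, b):
        return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

    def cluster(points, novelty_threshold=0.8, learning_rate=0.1):
        clusters = []                        # each cluster: list of feature means
        labels = []
        for p in points:
            sims = [similarity(p, c) for c in clusters]
            best = max(sims, default=0.0)
            if not clusters or best < novelty_threshold:
                clusters.append(list(p))     # "new" port: a new cluster pattern
                labels.append(len(clusters) - 1)
            else:
                i = sims.index(best)         # "out" port: best-matching cluster
                clusters[i] = [m + learning_rate * (x - m)
                               for m, x in zip(clusters[i], p)]
                labels.append(i)
        return clusters, labels

    raw = [[1.0, 2.0], [1.2, 1.8], [9.0, 8.5], [8.8, 9.0], [5.0, 5.0]]
    lo, hi = [0.0, 0.0], [10.0, 10.0]
    print(cluster([scale(p, lo, hi) for p in raw]))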
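Anomaly detection: novelty across several pattern spaces, computed as 1.0 minus the strongest match anywhere, mirroring the Group Operations plus Transformer circuit above. The memories and match function are assumptions.

    # Illustrative sketch only (not the NeurOS API): 1.0 - strongest match
    # across all pattern modules = how anomalous the current input is.
    def best_match(inputs, memories, match):
        """Group step: strongest matching score across all pattern modules."""
        scores = [match(p, inputs) for memory in memories for p in memory]
        return max(scores, default=0.0)

    def novelty(inputs, memories, match):
        """Transformer step: subtract the strongest match from 1.0."""
        return 1.0 - best_match(inputs, memories, match)

    def overlap(pattern, inputs):
        return len(pattern & inputs) / len(pattern)

    visual = [{"fur", "tail"}, {"feathers", "beak"}]
    audio = [{"bark"}, {"meow"}]
    print(novelty({"fur", "tail"}, [visual, audio], overlap))     # familiar -> ~0.0
    print(novelty({"scales", "hiss"}, [visual, audio], overlap))  # anomalous -> 1.0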

     

