## I. INTRODUCTION & MOTIVATION
Purely algorithmic AI, from predicate logic [1] to deep-learning neural nets [2-4], has proven highly effective for static, well-defined, narrow problems [5]. For dynamic, complex challenges, traditional AI becomes too 'brittle' (it fails due to inappropriate application), and human insight is necessary to guarantee sound, human-aligned solutions. Solutions built on insufficient insight can have deep, long-lasting human and economic consequences (e.g. failed conflict avoidance, the war on drugs, pandemic or climate ill-preparedness).
Insight is usually gained (besides randomness and serendipity) by knowing when and where to pose which types of questions, about what topic: that is, by posing 'insightful questions'. This ability thus requires a precise logical and mathematical meaning for the variables $\{when, where, what, which\}$, within well-defined contexts $C$ of human cognitive mindsets.
In this paper, the task of generating insightful questions uses a framework we call Shannon-Neumann Logic, or SN-Logic, built by combining information, probability, uncertainty [6] and utility [7], to cope with the fundamental concepts in insight-gains (see paper I [8]). This paper is structured as follows:
- In section 1, we discuss algorithmic vs human intelligence, and the purpose of SN-Logic.
- In section 2, we present the two-person (human $H$, AI agent $A_{SN}$) cooperative Iterated Questioning (IQ) game's role, from both $H$'s and $A_{SN}$'s perspectives.
- In section 2.3, we discuss the framework drift problem: coping with the changing human understanding of a given complex challenge, using a dynamic optimization process. In complex challenges (e.g. the war on drugs), it is impossible to clearly define a single problem from the start, which is partly why such challenges can last for decades.
- In sections 3.1-3.2, we discuss SN-Logic's requirements for coping with insight (which involves causality, information, logic, probability, uncertainty and utility), and the spaces over which SN-Logic operates.
- In sections 3.3-3.4, we introduce SN-Logic's grammar: semantics + syntax. The syntax is used by question generators to build millions of possible questions.
- In section 3.5, we present SN-Logic predicates of two classes: problem difficulty-minimizing, and solution quality-maximizing, used in all inferences
- In section 3.6, we discuss the complexity and scope of SN-Logic, and section 3.7 highlights the distinction between knowledge acquisition (symbolic AI) and cooperative (machine) learning, both present in our AI
- In section 3.8, we introduce the normal form for making $SN$-inferences about a question's insightfulness.
- In section 4, we introduce the Insight Gain Tensor $\mu(\text{when}, \text{where}, \text{what}, \text{which})$, used to select sound inferences from the many valid normal-form inferences, and measures of the insight gains associated with these questions.
- In section 5, we illustrate the use of SN-Logic and perform a validation test, showing how the SN-Logic/IQ-game helps find a solution path to a component of a hard, real-world solved case (a quantum field theory research topic).
## II. TWO-PERSON COOPERATIVE IQ-GAME
### a) IQ-game: Human player perspective
The Iterated Questioning (IQ) game is described in paper I. During a game session, the AI agent $A_{SN}$ poses the human player $H$ a question $q \in Q$ it thinks is most insightful, given $H$'s current cognitive mindset $C(t)$. $H$ then explores it, and reports whether it was insightful. These are the game's cooperative policies, which both players agree to adopt for each Q&A episode. The game serves several purposes that benefit both players (a positive-sum game) [7,9]. For the human player $H$, the IQ-game has the following main roles:
- The IQ-game is a Q&A process that reduces uncertainty and increases information about a specific problem, via a sequence of Q&As. It provides an effective tool for gaining insight into the many aspects of a complex challenge.
- The IQ-game drives a sequential (mostly left-hemispheric) conscious reasoning for solving well-defined (narrow) tasks. This process is mirrored by algorithmic AI. For complex tasks, this process alone fails to deliver full solutions. Conceptual solutions to such problems require the next process: insight-gaining.
- The IQ-game drives a parallel (mostly right-hemispheric) non-conscious process, for gaining insights leading to an 'aha' moment. Largely non-conscious processing can be used where the first process proves too slow or impossible (the task is too broad, ill-defined and complex).
- The IQ-game is driven by dual goals: minimizing obstacles and maximizing solution qualities. The minimizing questions guide $H$ to eliminate or reduce difficulties in the problem, when possible. The maximizing questions guide $H$ to boost specific solution qualities, when constraints allow it. It is a dynamic optimization (changes with $H$ 's understanding). We discuss this process in section 3.4.
- The IQ-game provides a non-brittle reasoning framework, which continuously adapts to the human player $H$ 's cognitive intentions $C$. This mindset $C$ evolves as $H$ 's understanding of the challenge progresses. The IQ-game copes with the framework drift problem (section 2.3).
### b) IQ-game: AI player perspective
For the AI-agent, $A_{SN}$, the IQ-game has these roles:
- The IQ game produces game session episodes, from which the agent $A_{SN}$ can learn via cooperative learning.
- The IQ game ensures the agent remains human-aligned [10], because of the continuous human judgments. What is useful, informative, or insightful for a human player $H$ is not necessarily so for $A_{SN}$, even if it starts that way. In the learning process, these values can drift apart, due to many factors. In the IQ game, human valuation is the ultimate arbiter of the insight value of a question (since any AI short of a full AGI superintelligence will fail miserably at this task), while SN-Logic estimates the insight values, given $C(t)$.
- The IQ game taps into a most valuable human resource: our collective evidence-based knowledge, undeniably our greatest accomplishment (culture, science, technology).
Note that our collective belief-based human selections are often poor (e.g. who we put in power as our leader). The forces here are complex and evolutionary: desire for control, cognitive biases and herd mentality from the fear of social isolation (e.g. [11]).
These factors are absent in the IQ procedure, since decisions are individual and based directly on one's own experience of a question's insight, within a very specific cognitive context $C(t)$. It uses direct evidence-based judgment, where $H$'s main incentive is to make life easier for herself. There are, of course, individual variations in the experienced insightfulness of questions, but only stable patterns (across many individuals) are retained in cooperative learning (not presented in this paper).
### c) Framework drift problem
A complex challenge is typically time-evolving, multi-objective, multi-solution, multi-discipline, multi-level and open-ended, making it hard from the start to clearly define a single problem, even when it is urgent (e.g. a crisis) or critical (e.g. sustainability), or both (e.g. a pandemic).
Instead, there is a drift in the framing of the problem and its solutions, as we accumulate new insights about a challenge: a framework drift problem. The drift cannot be handled with a static AI/ML system, focused on a given narrow problem.
The IQ-game copes with the framework drift by using an adaptive reasoning framework, and an adaptive cognitive intention $C = \{\text{framework}, \text{when}, \text{where}, \text{what}\}$ (sections 3.3-3.4), which tracks the human player $H$'s current understanding of the conceptual framework. It follows $H$'s evolving understanding of the challenge, helping SN-Logic suggest the insightful questions within each context $C$. The IQ-game doesn't define a problem from the start, but instead lets $H$ describe the challenge as her understanding of it evolves.
## III. Predicate SN-LOGIC
### a) SN-Logic requirements
Standard Logic Programming (predicate logic) is very effective when making strict deductions, but it cannot cope with the cooperative two-person IQ-game. The purpose of SN-Logic is to provide an inference engine meeting the following requirements; it has to be...
- precise: ambiguity-free axioms of semantics
- consistent: a contradiction-free framework within which all SN-inferences can be made (normal-form inferencing)
- transparent: natural language, no hidden layers
- explainable: no unjustifiable moves
- human-aligned: no conflicts with human cognitive intentions
- non-brittle: able to cope with the fundamental concepts related to human insight: causality (causes of insight), time-dependence (evolving understanding), information, probability, uncertainty (Shannon), utility (von Neumann), and insight (paper I). Brittleness is a common cause of AI failures.
To satisfy these requirements, we need a consistent set of SN-Logic definitions, axioms and rules, to which we now turn.
### b) SN-Logic Spaces
To reason using a predicate logic (such as SN-Logic), the variables $x$ need spaces $X$ to scope the quantification: $\forall x \in X, \exists x \in X$. SN-Logic's concepts are partitioned into six compact concept spaces, over which we can perform inferences (see appendices A-F):
Five vector spaces $\{T, S_D, S_C, S_G, S_S\}$ are used to describe the human player $H$'s changing cognitive mindset $C(t)$ during the IQ-game. The AI agent $A_{SN}$ needs to know $C(t)$, because the insightfulness of a question depends on $H$'s increasing understanding of the challenge and its possible solutions, as insight is accumulated.
The (tensor product) space $S_A$ of possible conceptual actions (operation × object) provides the raw material to build conceptual solutions.
- Vector space $T$ of exploration stages: vector variable [when $\in T$] describes the current stage when of the exploration cycle. The vector [when] rotates in $T$ over time (appendix A).
- Vector space $S_{D}$ of mental obstacles: vector variable [where $\in S_{D}$] describes where the human player $H$'s difficulties reside. The vector [where] rotates in $S_{D}$ over time while exploring the challenge (appendix B).
- Vector space $S_{C}$ of difficulty causes: vector variable [what $\in S_{C}$] describes what in the reasoning's framework is causing $H$ difficulty. The vector [what] rotates in $S_{C}$ over time while exploring the challenge (appendix C).
- Vector space $S_G$ of mental goals: vector variable [where $\in S_G$] describes the solution quality $H$ intends to improve. The vector [where] rotates in $S_G$ over time while exploring the challenge (appendix D).
- Vector space $S_{S}$ of solution elements: vector variable [what $\in S_{S}$] describes what aspect of the solution $H$ intends to improve. The vector [what] rotates in $S_{S}$ over time while exploring the challenge (appendix E).
- Tensor space of conceptual actions $S_A = O_p \times O_b$: action variable [which $\equiv$ action $\in S_A$] is composed of a mental operation (verb $\in O_p$) attached to a target object (noun $\in O_b$). Space $S_A$ provides the building-blocks of conceptual solutions (appendix F).
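To make these spaces concrete, the mindset $C(t)$ and the action space $S_A$ could be represented as below. This is a minimal Python sketch: the basis labels and field names are illustrative assumptions (the actual bases are defined in appendices A-F of the paper).

```python
from dataclasses import dataclass

# Hypothetical basis labels for each concept space (illustrative only;
# the real bases come from appendices A-F).
T   = ["specify_obstacle", "minimize_obstacle", "explore_ideas",
       "question_idea", "verify_idea"]                      # exploration stages
S_D = ["ambiguity", "complexity", "missing_information"]    # mental obstacles
S_C = ["assumption", "constraint", "representation"]        # difficulty causes
S_G = ["accuracy", "adaptability", "simplicity"]            # mental goals
S_S = ["structure", "procedure", "parameterization"]        # solution elements
O_p = ["decompose", "relax", "reformulate"]                 # verb operations
O_b = ["assumption", "equation", "constraint"]              # noun objects

# The tensor-product space of conceptual actions S_A = O_p x O_b:
S_A = [(verb, noun) for verb in O_p for noun in O_b]

@dataclass
class Mindset:
    """Human player H's cognitive mindset C(t) = {framework, when, where, what}."""
    frame: str   # current reasoning framework (e.g. a theory or discipline)
    topic: str   # tool in focus within the frame
    when: str    # exploration stage, element of T
    where: str   # obstacle (S_D) or goal (S_G), depending on question class
    what: str    # cause (S_C) or solution element (S_S)

C = Mindset(frame="quantum field theory", topic="assumptions",
            when="specify_obstacle", where="ambiguity", what="assumption")
```

The vector [when] "rotating" in $T$ then corresponds to updating `C.when` as the exploration cycle advances.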
### c) SN-Grammar: Axioms of Semantics
SN-Logic's role is to provide guidance for insight-building via a Q&A process: suggesting when and where to pose which questions about what topic. To be used in inferences, the meanings of the parts of speech (variables $\{when, where, what, which\}$) and of the sentence structure (questions which $\equiv q \in Q$) have to be both consistent and precise.
$A_{SN}$ needs a basic grammar (syntax, semantics, vocabulary) to communicate effectively with the human player $H$, in a consistent and precise manner. SN-Logic is based on four consistent (contradiction-free) axioms, to define its semantics precisely (ambiguity-free).
Let the human player $H$'s cognitive mindset $C(\text{framework}, p)$ be defined by the current reasoning framework (next section) and three (intention) parameters $p = \{when = p_1, where = p_2, what = p_3\}$; then:
- (Sem 1) Shannon-informative questions: a question (which) $q(p, \text{action})$ that reduces uncertainty (Shannon entropy) for $H$, whose mindset is $C(\text{framework}, p)$.
- (Sem 2) Neumann-useful questions: a question (which) $q(p, \text{action})$ that has a human-aligned (via the two-person IQ-game) utility within a mindset $C(\text{framework}, p)$: it helps $H$ make progress towards a solution.
- (Sem 3) $SN$-insightful questions: a question (which) $q(p, \text{action})$ satisfying (Sem 1, Sem 2) is $SN$-insightful within a mindset $C(\text{framework}, p)$; otherwise it is $SN$-insightless.
- (Sem 4) $SN$-valid inferences: an inference is SN-valid if and only if it has the $SN$ normal form (section 3.8).
These SN axioms of semantics allow the AI to cope with the core concepts of causality (causes of insight), dynamics (changing reasoning frames), information, probability, uncertainty [6], utility [7] and insight (paper I). These are necessary components of an insight-boosting AI. The axioms Sem 1 and Sem 2 restrict the form of allowed questions. This constraint is used by a $Q$-generator of questions $q \in Q$, to which we now turn.
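The conjunction stated in axiom Sem 3 can be written as a one-line rule; a minimal sketch, assuming boolean judgments stand in for the Sem 1 and Sem 2 evaluations:

```python
def sn_insightful(reduces_uncertainty: bool, has_utility: bool) -> str:
    """Axiom Sem 3: a question is SN-insightful (I+) iff it is both
    Shannon-informative (Sem 1) and Neumann-useful (Sem 2);
    otherwise it is SN-insightless (I0)."""
    return "I+" if (reduces_uncertainty and has_utility) else "I0"
```

A question that is informative but useless (or useful but uninformative) is thus SN-insightless under this rule.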
### d) SN-Grammar: Syntax for Dual-Optimization
The cooperative IQ-game is driven by dual objectives: to minimize the problem's causes of difficulty, and to maximize the solution's quality. The optimization must continuously adapt to $H$'s understanding of the challenge over an IQ-game session.
The SN-grammar has a simple syntax, specified for each question class $Q$. All questions $q \in Q$ fall into two classes $Q = \{Q_{min}, Q_{max}\}$, from two complementary (dual) perspectives: (a) causes of cognitive difficulty (to minimize), (b) qualities of the solution (to maximize). Each question class generates many specific questions, aimed at making insight-gains.
The purpose of SN-Logic is to incrementally boost our insight about solutions, by suggesting when and where to pose which types of questions about what topic, while adapting to a moving target: our current understanding of the obstacles in a challenge.
The question generator, or $Q$ -gen, of difficulty-minimizing questions, uses a specific syntax for an evolving cognitive mindset $C_{min}(frame, topic, p_1, p_2, p_3)$. There is a lot of freedom in which questions to pose, even at a specific place and time, within a well-defined framework. We select a set of six commonly useful problem-solving questions, to illustrate the procedure.
Q-Gen Syntax: difficulty-minimizing questions $q(p, \text{action}) \in Q_{\text{min}}$
- $q_{min1}$: at what exploration stage are we in now? (specifies when $= p_1 \in T$)
- $q_{min2}$: what reasoning frame are we operating in, now? (specifies [frame])
- $q_{min3}$: what topic in [frame] are we focusing on, now? (specifies [topic])
- $q_{min4}$: where does the main difficulty reside? (specifies where $= p_2 \in S_D$)
- $q_{min5}$: what, more specifically, causes this difficulty? (specifies what $= p_3 \in S_C$)
- $q_{min6}$: can you reduce the difficulty (where) and avoid its causes (what), by using these actions? (specifies action $\in S_A$ and which $= q_{min6} \in Q_{min}$)
The variable [action] $(\in S_A \equiv O_p \times O_b)$ is a product [verb operation] $(\in O_p)$ × [noun object] $(\in O_b)$ (appendices and section 5).
The [frame] variable labels the reasoning framework currently being used (e.g. a discipline, a subject, a specialty, a model, a system, a theory, a technology etc.). This framework can change from one exploration stage to the next. It is a moving target, which mirrors our current understanding of a complex challenge.
The [topic] variable labels a set of items we're focusing on within [frame] (e.g. agents, assumptions, bounds, properties, qualities, relations, statements, strategies, tactics, techniques etc.). Typically, [topic] is a tool we use within [frame], to make progress. For a concrete example, see section 5.
Questions $q \in Q_{min}$ are $SN$-insightful only if they are $SN$-informative (axiom Sem 1): they attempt to reduce a maximum possible amount of uncertainty (alternatives, ignorance, options, possibilities), within the context $C_{min}$.
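A $Q_{min}$-generator following this syntax can be sketched as below. The question wording follows the six templates above; the function name, data structures and example actions are hypothetical, not the paper's implementation:

```python
def q_min_generator(frame, topic, p1, p2, p3, actions):
    """Instantiate the six difficulty-minimizing templates q_min1..q_min6
    for a mindset C_min(frame, topic, p1, p2, p3)."""
    questions = [
        "q_min1: at what exploration stage are we in now?",
        "q_min2: what reasoning frame are we operating in, now?",
        f"q_min3: what topic in [{frame}] are we focusing on, now?",
        f"q_min4: where does the main difficulty reside? [{p2}]",
        f"q_min5: what, more specifically, causes this difficulty? [{p3}]",
    ]
    # q_min6 expands over candidate actions in S_A = O_p x O_b,
    # one concrete question per (verb, noun) pair.
    for verb, noun in actions:
        questions.append(
            f"q_min6: can you reduce [{p2}] and avoid [{p3}] "
            f"by trying to {verb} the {noun}?")
    return questions

qs = q_min_generator("quantum field theory", "assumptions",
                     "specify_obstacle", "ambiguity", "assumption",
                     actions=[("relax", "assumption"), ("decompose", "equation")])
```

Since $q_{min6}$ ranges over all of $S_A$, this single template accounts for most of the $\approx 10^7$ questions counted in section 3.6.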
The generator of quality-maximizing questions, uses a specific syntax for an evolving cognitive mindset $C_{max}(frame, topic, p_1, p_2, p_3)$:
Q-Gen Syntax: quality-maximizing questions $q(p, \text{action}) \in Q_{\text{max}}$
- $q_{max1}$: at what exploration stage are we in now? (specifies when $= p_1 \in T$)
- $q_{max2}$: what reasoning frame are we operating in, now? (specifies [frame])
- $q_{max3}$: what topic in [frame] are we focusing on, now? (specifies [topic])
- $q_{max4}$: where do you need a boost (goal)? (specifies where $= p_2 \in S_G$)
- $q_{max5}$: what solution aspect do you want to focus on? (specifies what $= p_3 \in S_S$)
- $q_{max6}$: can you boost your goal (where) and the solution's quality (what), by using these actions? (specifies action $\in S_A$ and which $= q_{max6} \in Q_{max}$)
Questions in $Q_{max}$ are SN-insightful only if they are SN-informative (axiom Sem 1): they attempt to reduce a maximum amount of uncertainty (alternatives, ignorance, options, possibilities), within the context $C_{max}$. They are specificity-boosting questions, which reduce uncertainty (Shannon entropy) to increase the solution's quality.
### e) SN-Logic predicates $q(x)$
The SN concept of insight involves notions in information, logic, probability, uncertainty and utility (see paper I). To cope with these, we need a logic with quantifiers for scoping the variables $x$ to specific spaces $X$. In standard predicate logic, a predicate is a function $p$ of a variable $x$, which maps a variable $x \in X$, into the predicate's truth values $\{T, F\}$ [12].
$$
p: X \to \{T, F\} \quad \text{and} \quad x \in X \mapsto p(x) = T \text{ or } F
$$
In SN-Logic, an SN-predicate is a function $q$ of a variable $x$, which maps a variable $x \in X$, into the predicate's insight values {insightful $I^{+}$,insightless $I^{0}$ }.
$$
q: X \to \{I^{+}, I^{0}\} \quad \text{and} \quad x \in X \mapsto q(x) = I^{+} \text{ or } I^{0}
$$
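The contrast between a classical predicate and an SN-predicate can be sketched in code; a minimal illustration, where the domain $X$ and the insightful subset are placeholder values:

```python
# Insight values of an SN-predicate, replacing the truth values {T, F}.
I_PLUS, I_ZERO = "I+", "I0"

def make_sn_predicate(domain, insightful_subset):
    """Return q: X -> {I+, I0}, defined only on the scoped domain X
    (the quantification scope of the variable x)."""
    def q(x):
        if x not in domain:
            raise ValueError("variable outside quantification scope X")
        return I_PLUS if x in insightful_subset else I_ZERO
    return q

# Placeholder domain of action labels, and a placeholder insightful subset.
X = {"relax_assumption", "decompose_equation", "rename_variable"}
q = make_sn_predicate(X, {"relax_assumption"})
```

Evaluating `q` on an element of `X` returns an insight value rather than a boolean, mirroring the mapping $q: X \to \{I^{+}, I^{0}\}$ above.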
In SN-Logic, we define the two classes (minimizing, maximizing) of predicates $q(x)$, the mindset parameter $p \in P \equiv \{when, where, what\}$ and the predicate variable 'cognitive action':
- SN-predicate questions $q(p, \text{action}) \in Q_{\min}$, where $p \in P$, action $\in S_A$
- SN-predicate questions $q(p, \text{action}) \in Q_{\max}$, where $p \in P$, action $\in S_A$
The parameter $p \in P$ lies in the space $P$ of cognitive mindsets $C_{min}(\text{framework}, p)$: the set of $H$'s intentions during the IQ-game. The AI needs to know this intent, to make useful cooperative suggestions. The mindset parameter $p$ encodes the type of insight $H$ wants to boost, at any given time.
### f) SN-Logic Complexity & Scope
SN-Logic only requires concept spaces $\{T, S_D, S_C, S_G, S_S, O_p, O_b\}$ of very small size $N = \text{Card}(\text{Space}) \approx 10^2$ (see appendices).
- Number of distinct cognitive mindsets: $N_{\text{cogn}} = O(\text{Card}(P)) = O(\text{Card}(T) \times \text{Card}(S_D) \times \text{Card}(S_C)) = 10 \times 10 \times 10 = 10^3$
- Number of possible conceptual actions: $N_{\text{acts}} = O(\text{Card}(S_A)) = O(\text{Card}(O_p) \times \text{Card}(O_b)) = 10^2 \times 10^2 = 10^4$
- Number of possible distinct questions: $N_{\text{ques}} = \operatorname{Card}(Q) = N_{\text{cogn}} \times N_{\text{acts}} = 10^7$ minimizing questions, posed by the $Q_{\text{min}}$ -generator (same for maximizing questions).
These numbers already compare favorably to a typical human problem-solver $H$ working by herself. But the real power of SN-Logic (its scope of applications) comes from the combinatorial possibilities: the possible combinations and permutations of insight-boosting questions needed to solve each class of challenges:
- Number of combinations: $N_{comb} = 2^{N_{ques}}$
- Number of permutations: $N_{perm} = N_{ques}!$
Thus, the number of distinct classes of challenges SN-Logic can cope with is effectively infinite ($N_{perm} = 10^{7}!$), yet it is based on a few small, compact concept spaces (cardinality $\approx 10^{2}$). In this sense, SN-Logic is economical (Occam's razor).
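The order-of-magnitude estimates above are easy to reproduce; a short sketch using the section's nominal cardinalities (assumed values, per the appendices):

```python
import math

# Nominal cardinalities from section 3.6: ~10 per mindset space,
# ~10^2 per action sub-space.
card_T, card_SD, card_SC = 10, 10, 10
card_Op, card_Ob = 100, 100

N_cogn = card_T * card_SD * card_SC   # distinct cognitive mindsets
N_acts = card_Op * card_Ob            # conceptual actions in S_A = O_p x O_b
N_ques = N_cogn * N_acts              # distinct minimizing questions

# The combinatorial quantities N_comb = 2**N_ques and N_perm = N_ques!
# are far too large to evaluate directly; we can only characterize them,
# e.g. by the number of decimal digits of N_comb:
digits_comb = int(N_ques * math.log10(2)) + 1   # ~3 million digits
```

This confirms $N_{cogn} = 10^3$, $N_{acts} = 10^4$ and $N_{ques} = 10^7$, and shows why $N_{comb}$ and $N_{perm}$ are, for practical purposes, unbounded.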
### g) Symbolic AI (knowledge acquisition) vs Learning
The computed complexity of SN-Logic is a theoretical upper bound, used to determine the scope of SN-Logic. In practice, the computational cost will be much lower, due to universal constraints (common to all challenge classes), imposed by (mostly) challenge-independent forces:
- causality: universal root causes of cognitive difficulties (e.g. confusion due to ambiguity, indecision due to missing information) and solution quality (e.g. accuracy, adaptability)
- logic: valid inferences with sound semantics
- planning: logically necessary chronology of solution steps
- problem-solving: universal tactics to minimize obstacles (to avoid/reduce), and maximize solution quality (to target/increase/maximize) (e.g. divide-and-conquer, minimize ambiguity, maximize order, simplify)
- information: a question is only informative if it reduces uncertainty by eliminating alternatives, options, outcomes, or possibilities, within a cognitive mindset (intention) $C$, restricting the insightful questions to a manageable subset: $q \in Q^{*}(C) \subset Q$, with $\text{Card}(Q^{*}(C)) \ll \text{Card}(Q)$
- utility: a question is only useful if it helps $H$ overcome obstacles, given a cognitive intention $C$, restricting the insightful questions to a manageable subset: $q \in Q^{*}(C) \subset Q$, with $\text{Card}(Q^{*}(C)) \ll \text{Card}(Q)$
These rules impose a lot of structure on the SN-agent's insight-gain tensor $\mu(\text{frame},\text{topic},\text{when},\text{where},\text{what},\text{which})$, which is, in its fully general form, a high-dimensional rank-6 tensor, but is in practice very sparse and decomposable into simpler tensors and convolution kernels.
The structure imposed by the universal (challenge class-independent) constraints is sufficient to construct factored ('vanilla') tensors $\mu^{*}$ of much lower dimensions and lower rank: this is knowledge acquisition. A 'flavor' is then learned, to fine-tune the tensors to each class of challenge via cooperative learning (not described in this paper). Given the complexity upper bounds of SN-Logic, the fine-tuning possibilities are vast.
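One plausible way to realize such a factored tensor is a rank-1, CP-style product of per-axis factors; a sketch under that assumption, with random placeholder values standing in for acquired knowledge:

```python
import random

# Axis sizes follow the cardinalities of section 3.6; factor values are
# random placeholders, NOT knowledge-acquired or learned quantities.
random.seed(0)
when_f  = [random.random() for _ in range(10)]    # factor over T
where_f = [random.random() for _ in range(10)]    # factor over S_D (or S_G)
what_f  = [random.random() for _ in range(10)]    # factor over S_C (or S_S)
verb_f  = [random.random() for _ in range(100)]   # factor over O_p
noun_f  = [random.random() for _ in range(100)]   # factor over O_b

def mu_star(i_when, i_where, i_what, i_verb, i_noun):
    """Factored insight-gain estimate for one (p, action) pair, in [0, 1]."""
    return (when_f[i_when] * where_f[i_where] * what_f[i_what]
            * verb_f[i_verb] * noun_f[i_noun])

# Storage: 10 + 10 + 10 + 100 + 100 = 230 numbers, instead of the
# 10^7 entries of the full (per-frame) tensor.
```

Cooperative learning would then correspond to adjusting these small factors (or adding low-rank terms) per challenge class, rather than re-estimating the full tensor.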
### h) SN-Logic Normal Form
$A_{SN}$'s fundamental problem is to use the IQ-game to guide a human player $H$ in when and where to pose which types of questions about what topic, to gain a maximum amount of insight into a complex challenge.
A standard normal-form inferencing (analogous to conjunctive and disjunctive normal forms, in digital and predicate logic) is necessary for the AI to cope with the computational complexity of SN-Logic. The AI can then efficiently search for predicate variables action $\in S_A$, used as building-blocks for conceptual solutions. Given an evolving inferencing framework (frame, topic), the SN normal forms are the following:
SN normal form for minimizing inferences:
Given a minimizing mindset $C_{min}(\text{frame}, \text{topic}, p)$, where $p \in P = \{when, where, what\}$,
if $\exists\, \text{action} \in S_A$, such that $\mu_{min}(\text{frame}, \text{topic}, p, \text{action}) > \mu_{crit}$, then
$q(p, \text{action}) \in Q^{*}_{min}(C_{min}) \subset Q_{min}$, and
$q(p, \text{action})$ is SN-insightful, within $C_{min}$.

SN normal form for maximizing inferences:
Given a maximizing mindset $C_{max}(\text{frame}, \text{topic}, p)$, where $p \in P = \{when, where, what\}$,
if $\exists\, \text{action} \in S_A$, such that $\mu_{max}(\text{frame}, \text{topic}, p, \text{action}) > \mu_{crit}$, then
$q(p, \text{action}) \in Q^{*}_{max}(C_{max}) \subset Q_{max}$, and
$q(p, \text{action})$ is SN-insightful, within $C_{max}$.
The sets $Q^{*}(C)$ are maximum-insight subsets of $Q_{\min}$ or $Q_{\max}$, and $\mu(\text{frame}, \text{topic}, p, \text{action})$ is an insight-gain tensor (discussed shortly) whose insight gains are above a minimum critical cutoff $\mu_{\text{crit}}$. The purpose of an insight-gain cutoff scale is intuitive, but its mathematical justification is outside the scope of this paper, which focuses only on logical validity and ignores scientific soundness. The cutoff is related to a scale-invariance due to a conformal symmetry, under the renormalization of probabilities (unitarity). Scale-separation is used in quantum field theories [13], but justified by the conformal symmetry [14] of a renormalization group [15].
To perform successful inferences autonomously, the AI agent needs the means of deciding whether a predicate variable action $\in S_A$ leads to insight gains above a minimum lower bound (that is, whether $\text{action} \in S_A^*(C) \subset S_A$). The insight-gain tensor provides the SN-agent with the ability to select sound inferences, from a vast number of merely valid ones (that is, of SN normal form).
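The normal-form selection rule can be sketched as a filter over candidate actions. The toy dictionary below stands in for the real tensor $\mu_{min}$, and all names and values are hypothetical:

```python
MU_CRIT = 0.5   # minimum critical insight-gain cutoff (placeholder value)

# Toy stand-in for the insight-gain tensor, keyed by (frame, topic, p, action);
# unknown entries default to 0 (no expected insight gain).
mu = {
    ("QFT", "assumptions", "specify_obstacle", ("relax", "assumption")): 0.8,
    ("QFT", "assumptions", "specify_obstacle", ("rename", "variable")):  0.1,
}

def normal_form_inference(frame, topic, p, candidate_actions):
    """SN normal form: keep only q(p, action) whose estimated gain
    exceeds mu_crit, yielding the sound subset Q*(C)."""
    return [(p, a) for a in candidate_actions
            if mu.get((frame, topic, p, a), 0.0) > MU_CRIT]

q_star = normal_form_inference("QFT", "assumptions", "specify_obstacle",
                               [("relax", "assumption"), ("rename", "variable")])
```

Only the high-gain action survives the cutoff, illustrating how the tensor separates sound inferences from merely valid ones.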
## IV. INSIGHT GAIN TENSORS $\mu$
### a) Need for Insight-Gain Tensors
The AI performs SN normal-form inferences to suggest insightful questions to explore, given human-targeted insight gains $C(p)$. These 'most insightful' questions lie in a restricted subspace $Q^{*}(C) = \{Q_{min}^{*}(C_{min}), Q_{max}^{*}(C_{max})\}$, within a large space $Q$ of possible questions ($\text{Card}(Q) = 10^7$). Given a current mindset $C(p)$, $A_{SN}$ must find the subspace of questions $Q^{*}(C)$. This is where an insight-gain measure $\mu(p, \text{action})$ (convolution tensors and their kernels, used to restrict searches to optimal sub-spaces) is essential, to make sound inferences (real-world accurate), rather than merely valid ones (SN normal-form inferences). This will be presented elsewhere. For now, we simply discuss the general constraints imposed by SN-Logic on the tensor elements.
### b) Constraints on Insight-Gain Tensors $\mu$
The AI's capacity to generate $SN$-insightful $I^{+}$ questions, from a vast number of insightless $I^{0}$ ones (with actions $\in S_A$), resides in the structure of a high-dimensional insight-gain tensor $\mu(\text{when, where, what, which}) \equiv \mu(p, \text{action})$, for each challenge class and reasoning frame. So the full rank-7 tensor is actually $\mu(\text{class, frame, topic, } p_1, p_2, p_3, \text{action})$. This function outputs the value $g$ of the insight gain associated with exploring a question which $\equiv q(p, \text{action}) \in Q$, where $p \in P$ encodes $H$'s targeted insight gains. To be useful, the tensor $\mu$ is required to satisfy the following properties:
- $\mu: Cl \times Fr \times P \times S_A \to [0,1]$, where $Cl =$ set of challenge classes, $Fr =$ set of reasoning frameworks (frame + topic), $P = T \times S_1 \times S_2$, $S_A = O_p \times O_b$, $S_1 = S_D$ or $S_G$, and $S_2 = S_C$ or $S_S$
- it is a measure of insight gain: $\mu(\text{class},\text{frame},\text{topic},p,\text{action}) = g \in [0,1]$ (normalized)
- the probabilities of all possible actions within a mindset $p$ must sum to one (unitarity)
- $\mu_{\text{crit}} \in ]0,1[$ (minimum critical insight-gain value $\mu > \mu_{\text{crit}}$ )
- $g = 0$ when $q(p, \text{action})$ is $SN$ -insightless $I^0$, given the mindset $p$
- $g = 1$ when $q(p, \text{action})$ is maximally SN-insightful $I^{+}$, given the mindset $p$
- $\mu$ is initialized by satisfying heuristics from causality, information, logic, planning, problem solving and utility. These constraints provide the initial (challenge class-independent) approximation for $\mu$
- $\mu$ gets optimized (fine-tuned) for specific classes of challenges, by cooperative learning, using the IQ-game's session episodes
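The normalization constraints (unitarity, $g \in [0,1]$) can be illustrated numerically; the gain values below are invented for illustration only:

```python
# Toy gains mu(p, action) for a fixed mindset p, over three candidate
# actions (verb, noun). All values are placeholders in [0, 1].
raw_gains = {("relax", "assumption"): 0.8,
             ("decompose", "equation"): 0.3,
             ("rename", "variable"): 0.1}

# Unitarity: normalize the gains over all actions available in mindset p,
# so the induced action probabilities sum to one.
total = sum(raw_gains.values())
action_probs = {a: g / total for a, g in raw_gains.items()}
```

Under this convention, $g = 0$ marks an SN-insightless action and $g = 1$ a maximally SN-insightful one, while the normalized `action_probs` satisfy the unitarity property.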
## V. VALIDATION TEST: POST-DOC RESEARCHER'S DILEMMA
We can now illustrate how SN-Logic is used on a real challenge. In the IQ-game, both players (human $H$, AI $A_{SN}$) agree to use simple cooperative strategies, given $H$'s current mindset $C$:
(1) $A_{SN}$ suggests its guess at a most insightful question ($q \in Q^{*}(C)$)
(2) $H$ reports the questions $q$ she actually finds insightful
The game's Q&A session cycles over each obstacle encountered within a challenge. Hundreds of such sub-problems may be encountered in solving a challenge. Usually, in real-world challenges, the number and nature of these obstacles is unknown ahead of time.
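One Q&A episode under these two cooperative policies can be sketched as a single function; the `suggest` and `human_feedback` callables below are stand-ins for $A_{SN}$'s inference engine and $H$'s judgment, not part of the paper's system:

```python
def iq_episode(suggest, human_feedback, mindset):
    """One IQ-game episode: (1) A_SN suggests its best question for the
    current mindset C; (2) H reports whether it was insightful. The pair
    is returned so it can be stored for cooperative learning."""
    question = suggest(mindset)            # A_SN's guess at q in Q*(C)
    insightful = human_feedback(question)  # H's evidence-based judgment
    return question, insightful

q, ok = iq_episode(
    suggest=lambda C: "q_min4: where does the main difficulty reside?",
    human_feedback=lambda question: True,   # H found the question insightful
    mindset={"frame": "quantum field theory"})
```

Iterating this episode over the obstacles of a challenge, while updating the mindset, is exactly the session loop described above.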
For clarity, we use a single, static, not-so-complex, yet most difficult challenge. The scenario: a young post-doctoral researcher, $H$, is trying to find a good quantum field theory topic to spend her next ten years on. The first few moves (Q&As) of the two-person IQ-game could proceed as follows:
Q from $A_{SN}$: 'Greetings! What class of challenge are we exploring today?' (the sample below depends on what SN-Logic is being used for):
- To compose something e.g. music, arts, literature, programming
- To compute something e.g. any domain
- To construct something e.g. any domain
- To design something e.g. engineering, technology
- To discover something e.g. science, mathematics
- To govern something e.g. crisis mitigation, leadership
- To invent something e.g. engineering, science, technology
- To manage something e.g. corporate, government
- To optimize something e.g. any domain
- To predict something e.g. any domain
- To reconstruct something e.g. intelligence, inverse problems, sensing
- To solve something e.g. any domain
A by $H$: I want to improve on standard quantum field theory; it's a 'discover' class of challenge ([class] = discover).
1. Q from $A_{SN}$: Which exploration stage are we in, now?
(AI is using $q_{min1}$ in the Q-generator)
- to specify a current obstacle
- to minimize the obstacle
- to explore solution ideas
- to question a solution idea
- to verify a solution idea
A by $H$: 'I want to identify the current obstacle'
2. Q from $A_{SN}$: What is our current reasoning framework?
(AI is using $q_{min2}$ in the Q-generator)
The framework is composed of a topic and a frame
The topic can be any useful tool we select for overcoming the obstacle (select the closest match):
- actions e.g. activities or behaviors
- agents e.g. catalysts or inhibitors
- computations e.g. algorithms
- equations e.g. model or representation
- laws e.g. laws of quantum physics
- limits e.g. lower, upper, extremes
- procedures e.g. protocols or decisions
- processes e.g. interactions or communications
- properties e.g. pattern or symmetry
- qualities e.g. strengths or weaknesses
- relationships e.g. hierarchy or priorities
- restrictions e.g. constraints or conditions
- rules e.g. allowed or forbidden
- statements e.g. assumptions, conditions or theorems
- states e.g. equilibrium or criticality
- strategies e.g. divide-and-conquer
- structures e.g. classes, partitions, sets
- tactics e.g. explore special cases
- techniques e.g. calculation or construction
The reasoning frame is the clearly-defined context within which the topic is being used (select the closest match):
- discipline e.g. molecular biology
- subject e.g. protein folding
- context e.g. social revolution
- environment e.g. location and time
- event e.g. activity or pandemic
- model e.g. just-in-time supply
- principle e.g. quantum computing
- method e.g. optimization
- network e.g. communication
- theory e.g. general relativity
- specialty e.g. programming
- system e.g. quantum communication
- technology e.g. fresh water extraction
Note SN-Logic's non-brittleness: at any given time, the reasoning frame can adapt to any required abstraction level and scope. Such frame changes are typically unpredictable at the start of a real-world challenge.
A by $H$: For my research direction, I want to identify a weakness in quantum field theory (QFT) (so here, [frame] ≡ theory, and [topic] ≡ qualities).
3. Q from $A_{SN}$: What's your main difficulty with the [frame], right now? (select the closest match)
(AI is using $q_{min4}$ from the Q-generator + an insight-gain tensor/kernel)
- inability to apply [topic] in/of [frame]
- inability to compute [topic] in/of [frame]
- inability to construct [topic] in/of [frame]
- inability to decide [topic] in/of [frame]
- inability to evaluate [topic] in/of [frame]
- inability to exploit [topic] in/of [frame]
- inability to identify [topic] in/of [frame]
- inability to select [topic] in/of [frame]
- inability to simplify [topic] in/of [frame]
- inability to solve [topic] in/of [frame]
- inability to understand [topic] in/of [frame]
A by $H$: I can't evaluate the weaknesses [topic = qualities] of quantum field theory [frame = theory]
4. Q from $A_{SN}$: 'More specifically, why can't you evaluate the [topic] in [frame]?' (select the closest root cause of the difficulty)
(AI is using $q_{min5}$ from the Q-generator + an insight-gain tensor/kernel)
- missing comparison for [topic] in/of [frame]
- missing constraint on [topic] in/of [frame]
- missing criterion for [topic] in/of [frame]
- missing direction in [topic] in/of [frame]
- missing information about [topic] in/of [frame]
- missing intuition for [topic] in/of [frame]
- missing knowledge of [topic] in/of [frame]
- missing metric for [topic] in/of [frame]
- missing ranking of [topic] in/of [frame]
- missing standard for [topic] in/of [frame]
- missing value of [topic] in/of [frame]
A by $H$: 'I lack an intuition for the weaknesses of QFT'
5. Q from $A_{SN}$: 'Awesome, so our current obstacle is our missing intuition for the weaknesses of QFT. Let's try to eliminate this obstacle.'
A by $H$: 'Okay, I'm all ears!'
6. Q from $A_{SN}$: Can we gain intuition to evaluate QFT's weaknesses, by... (explore any question you think is promising, or move on)
(AI is using $q_{min6}$ in the Q-generator + SN normal-form inferences + insight-gain tensors/kernels)
by exploring:
- idealized cases of the theory (QFT)
- solved cases of the theory (QFT)
- simple cases of the theory (QFT)
by outlining:
- consequences (causal) of the theory (QFT)
- implications (logical) of the theory (QFT)
- predictions (temporal) of the theory (QFT)
- tests (experimental) of the theory (QFT)
by identifying:
- inconsistent aspects of the theory (QFT)
- limitations of the theory (QFT)
- problematic aspects of the theory (QFT)
- uncertain aspects of the theory (QFT)
- unjustified aspects of the theory (QFT)
- untested aspects of the theory (QFT)
by looking for:
- ambiguities (imprecision)
- contradictions (logical, evidence)
- counter-examples (exceptions)
- discrepancies (differences)
- dogma (cognitive traps)
- errors (math, procedures)
- falsehoods (logical)
- flaws (procedure, reasoning)
- gaps (missing pieces)
- implicit assumptions (reasoning)
- impossibilities (logical, physical)
- inaccuracies (scientific, technical)
- incompatibilities (between two items)
- inconsistencies (logical)
- limitations (scope of applicability)
- unexplained items (no explanation)
- unjustified items (lack justification)
- unsupported items (lack evidence)
- violations (law-breaking)
- weaknesses (logical)
A by $H$: 'I find some questions quite insightful (click on each insightful one, and note the reasons for your record):
I outlined the implications of QFT (e.g. including matrix unitarity) and QFT's experimental tests (e.g. including neutron decay experiments). I found reported incompatibilities (e.g. known violations of the CKM matrix's unitarity [16], in neutron decay experiments [17]). That seems like an interesting research area in quantum field theory for me.'
7. Q from $A_{SN}$: Do you want to identify a new obstacle, now?...
Note: for a complex challenge, limitless combinations of obstacles can be explored in this manner.
This scenario shows how suggested questions from $A_{SN}$ can replicate real-world solutions to obstacles, via a cooperative Q&A dialog. Researchers do something similar among themselves early on, to decide what to work on. But the AI's complementary strength is to cover many exploration paths which are very often overlooked, yet may be key to quality solutions. This dynamic human-AI interaction would be even more fruitful in a group brainstorming session, where each member of the team can select directions to explore and possible answers.
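The slot-filling pattern behind moves 3-4 above can be sketched as a minimal template expander. The cause list mirrors the menu shown in move 4; the function name `expand_q_min5` and the expansion mechanics are illustrative assumptions about how a Q-generator fills its [topic]/[frame] slots.

```python
CAUSES = [
    "comparison for", "constraint on", "criterion for", "direction in",
    "information about", "intuition for", "knowledge of", "metric for",
    "ranking of", "standard for", "value of",
]

def expand_q_min5(topic, frame):
    """Fill the 'missing <cause> [topic] of [frame]' template for each cause."""
    return [f"missing {cause} {topic} of {frame}" for cause in CAUSES]

# H's chosen topic/frame from move 2 of the dialogue:
options = expand_q_min5("qualities", "theory")
```

H's answer in move 4 corresponds to the option 'missing intuition for qualities of theory'.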
## VI. DISCUSSION
### a) Tensor Construction & Cooperative learning
We mentioned (section 3.7) that insight-gain convolution tensors and kernels form the bridge between SN normal-form inferencing (SN-validity) and measures of insight (SN-soundness); the bridge between logic (validity) and science (soundness). Initially, the tensors $\mu$ form the AI's 'vanilla' core; learned flavors are then added to it via machine learning, to adapt the core AI to distinct challenge classes.
The AI's core will be initialized by heuristics from causality, information, logic, planning, problem-solving, and utility, which apply to all types of challenges. The tensors' added flavor must then be learned by cooperative learning, via a renormalization procedure, from the IQ-game's episodes. The construction of the insight-gain tensors and the cooperative learning will be described in future work.
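As a toy illustration of this validity-to-soundness bridge, a learned $\mu$ can be used to filter and rank SN-valid questions. The dict-based representation of $\mu$ and the threshold value are assumptions made purely for clarity.

```python
def select_sound(valid_questions, mu, mu_crit=0.3, top_k=5):
    """From a pool of SN-valid questions, keep only those whose measured
    insight gain mu exceeds the critical threshold, ranked best-first."""
    scored = sorted(((mu.get(q, 0.0), q) for q in valid_questions), reverse=True)
    return [q for g, q in scored if g > mu_crit][:top_k]
```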
### b) Conclusion
We presented the foundations of SN-Logic, designed to boost human insight and help overcome challenges that are hard to deal with using traditional AI (mainly predicate logic and deep-learning neural nets). This required a logic capable of coping with the concepts necessary to measure insight-gains: causality (causes of insight gains), dynamics (adaptive reasoning frameworks), information, probability, uncertainty (Shannon) and utility (von Neumann).
In this paper, we presented the following:
- The two-person $(H, A_{SN})$ cooperative IQ-game's role from both $H$ 's and $A_{SN}$ 's perspectives
- The frame drift problem: coping with the changing understanding of a challenge, using a (non-brittle) logic and optimization process, which continuously adapt to the current human understanding and intention
- SN-Logic's requirements to compute insightfulness (which involves causality, information, logic, probability, uncertainty and utility) and the concept spaces over which SN-Logic operates (to scope the quantifiers)
- SN-Logic's grammar: semantics + syntax for posing questions $q \in Q$ from a vast space of potential questions. The syntax is used by a dual question generator ( $q \in Q_{min}, q \in Q_{max}$ ), from which all questions are built ( $N_{ques} = O(10^7)$ )
- SN-Logic predicates of two question classes: problem difficulty-minimizing, and solution quality-maximizing, used in all inferences
- The complexity of SN-Logic, showing its broad scope and its capability to cope with a large number of distinct challenge classes
- The SN normal-form for making valid inferences, about a question's insightfulness, efficiently within a vast space of possibilities
- Insight-gain tensors $\mu(when, where, what, which)$, which are necessary to select sound inferences (real-world accurate) from a vast (effectively infinite) number of valid ones (those in SN normal form). $\mu$ measures the human insight gains associated with posed questions, within their cognitive mindsets $(C_{min}, C_{max})$
- A validation test, to show that SN-Logic can replicate the solution steps, to a real-world solved case (discovery in quantum field theory)
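The order of magnitude $N_{ques} = O(10^7)$ quoted above can be made concrete with a back-of-the-envelope count over the spaces in the appendices. All cardinalities below are illustrative sample sizes read off the appendix tables and the dialogue menus, not the paper's exact numbers.

```python
# Questions combine one basis vector from each relevant space,
# so the question count is a product of space sizes.
n_stages = 5         # |T|, Appendix A (exploration steps)
n_difficulties = 24  # |S_D|, Appendix B (cognitive difficulties, sample)
n_causes = 40        # |S_C|, Appendix C (difficulty causes, sample)
n_topics = 19        # topic menu in move 2 of the dialogue
n_frames = 13        # frame menu in move 2 of the dialogue

n_questions = n_stages * n_difficulties * n_causes * n_topics * n_frames
# already over 10^6 from these samples alone; the full spaces reach O(10^7)
```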
This paper focused solely on logic and validity of SN-inferences. It has not dealt with the equally important issue of scientific soundness and accuracy. We will present the construction of the insight-gain convolution tensors and kernels, and the learned structure (cooperative learning), in future papers.
## VII. APPENDICES
A: Vector Space of Exploration Steps $T$ (sample)
<table><tr><td>Time basis vectors: when ≡ p1 ∈ T</td></tr><tr><td>to identify an obstacle</td></tr><tr><td>to minimize the obstacle</td></tr><tr><td>to explore solution ideas</td></tr><tr><td>to question a solution idea</td></tr><tr><td>to verify a solution idea</td></tr></table>
B: Vector Space of Cognitive Difficulties $S_{D}$ (sample)
<table><tr><td colspan="2">Basis vectors of cognitive obstacles: where ≡ p2 ∈ SD</td></tr><tr><td>inability to classify</td><td>[frame]</td></tr><tr><td>inability to compute</td><td>[frame]</td></tr><tr><td>inability to connect</td><td>[frame]</td></tr><tr><td>inability to construct</td><td>[frame]</td></tr><tr><td>inability to count</td><td>[frame]</td></tr><tr><td>inability to decide</td><td>[frame]</td></tr><tr><td>inability to design</td><td>[frame]</td></tr><tr><td>inability to eliminate</td><td>[frame]</td></tr><tr><td>inability to evaluate</td><td>[frame]</td></tr><tr><td>inability to exploit</td><td>[frame]</td></tr><tr><td>inability to extract</td><td>[frame]</td></tr><tr><td>inability to identify</td><td>[frame]</td></tr><tr><td>inability to interpret</td><td>[frame]</td></tr><tr><td>inability to organize</td><td>[frame]</td></tr><tr><td>inability to perform</td><td>[frame]</td></tr><tr><td>inability to plan</td><td>[frame]</td></tr><tr><td>inability to predict</td><td>[frame]</td></tr><tr><td>inability to rank</td><td>[frame]</td></tr><tr><td>inability to relate</td><td>[frame]</td></tr><tr><td>inability to select</td><td>[frame]</td></tr><tr><td>inability to simplify</td><td>[frame]</td></tr><tr><td>inability to solve</td><td>[frame]</td></tr><tr><td>inability to transform</td><td>[frame]</td></tr><tr><td>inability to verify</td><td>[frame]</td></tr><tr><td>etc.</td><td></td></tr></table>
C: Vector Space of Difficulty Causes $S_{C}$ (sample)
<table><tr><td colspan="2">Basis vectors of causes: what ≡ p3 ∈ SC</td></tr><tr><td>level of abstraction of</td><td>[item]</td></tr><tr><td>level of ambiguity of</td><td>[item]</td></tr><tr><td>level of complexity of</td><td>[item]</td></tr><tr><td>level of dependencies in</td><td>[item]</td></tr><tr><td>level of flaws in</td><td>[item]</td></tr><tr><td>level of fragmentation of</td><td>[item]</td></tr><tr><td>level of implicitness in</td><td>[item]</td></tr><tr><td>level of impracticality of</td><td>[item]</td></tr><tr><td>level of imprecision of</td><td>[item]</td></tr><tr><td>level of incompleteness of</td><td>[item]</td></tr><tr><td>level of inconsistency in</td><td>[item]</td></tr><tr><td>level of indecision about</td><td>[item]</td></tr><tr><td>level of indetermination in</td><td>[item]</td></tr><tr><td>level of inefficiency of</td><td>[item]</td></tr><tr><td>level of insufficiency of</td><td>[item]</td></tr><tr><td>level of uncertainty in</td><td>[item]</td></tr><tr><td>level of unpredictability of</td><td>[item]</td></tr><tr><td>level of weakness of</td><td>[item]</td></tr><tr><td>etc.</td><td></td></tr><tr><td>missing assumption about</td><td>[item]</td></tr><tr><td>missing bounds on</td><td>[item]</td></tr><tr><td>missing capacity for</td><td>[item]</td></tr><tr><td>missing classification of</td><td>[item]</td></tr><tr><td>missing confidence in</td><td>[item]</td></tr><tr><td>missing connections in</td><td>[item]</td></tr><tr><td>missing constraints on</td><td>[item]</td></tr><tr><td>missing evidence for</td><td>[item]</td></tr><tr><td>missing explanation for</td><td>[item]</td></tr><tr><td>missing freedom to</td><td>[item]</td></tr><tr><td>missing information about</td><td>[item]</td></tr><tr><td>missing interpretation of</td><td>[item]</td></tr><tr><td>missing intuition for</td><td>[item]</td></tr><tr><td>missing justification for</td><td>[item]</td></tr><tr><td>missing motivation for</td><td>[item]</td></tr><tr><td>missing organization of</td><td>[item]</td></tr><tr><td>missing representation of</td><td>[item]</td></tr><tr><td>missing restriction on</td><td>[item]</td></tr><tr><td>missing scales in</td><td>[item]</td></tr><tr><td>missing statements in</td><td>[item]</td></tr><tr><td>missing tools for</td><td>[item]</td></tr><tr><td>missing verification of</td><td>[item]</td></tr><tr><td>etc.</td><td></td></tr></table>
D: Vector Space of Mental Goals $S_G$ (sample)
<table><tr><td colspan="2">Basis vectors of cognitive goals: where ≡ p2 ∈ SG</td></tr><tr><td>clarity about the</td><td>[solution item]</td></tr><tr><td>confidence in the</td><td>[solution item]</td></tr><tr><td>construction of the</td><td>[solution item]</td></tr><tr><td>criticism of the</td><td>[solution item]</td></tr><tr><td>exploitation of the</td><td>[solution item]</td></tr><tr><td>imagination for the</td><td>[solution item]</td></tr><tr><td>intuition for the</td><td>[solution item]</td></tr><tr><td>understanding of the</td><td>[solution item]</td></tr><tr><td>etc.</td><td></td></tr></table>
Note: mental goals [where] are intentions one tries to maximize, under constraints.
The vector $where \in S_G$ rotates in $S_G$ as the mindset $C$ about the challenge evolves.
E: Vector Space of Solution Elements $S_{S}$ (sample)
<table><tr><td>Basis vectors of solution elements: what ≡ p3 ∈ Ss</td></tr><tr><td>solution's agents</td></tr><tr><td>solution's cases</td></tr><tr><td>solution's components</td></tr><tr><td>solution's consequences</td></tr><tr><td>solution's constraints</td></tr><tr><td>solution's dimensions</td></tr><tr><td>solution's economy</td></tr><tr><td>solution's efficiency</td></tr><tr><td>solution's effectiveness</td></tr><tr><td>solution's ethics</td></tr><tr><td>solution's form</td></tr><tr><td>solution's framework</td></tr><tr><td>solution's information</td></tr><tr><td>solution's justification</td></tr><tr><td>solution's methods</td></tr><tr><td>solution's plan</td></tr><tr><td>solution's properties</td></tr><tr><td>solution's qualities</td></tr><tr><td>solution's relationships</td></tr><tr><td>solution's requirements</td></tr><tr><td>solution's resources</td></tr><tr><td>solution's restrictions</td></tr><tr><td>solution's space</td></tr><tr><td>solution's statements</td></tr><tr><td>solution's sustainability</td></tr><tr><td>solution's utility</td></tr><tr><td>solution's value</td></tr><tr><td>etc.</td></tr></table>
F: Space of Actions $S_A = O_p \times O_b$ (tiny sample)
<table><tr><td>Conceptual Action Space: operation ∈ Op × object ∈ Ob</td></tr><tr><td>Actions to minimize indecision:
avoiding, comparing, demanding, imposing, evaluating, excluding, justifying, maximizing, minimizing, optimizing, prioritizing, ranking, requiring, selecting, weighing items etc.</td></tr><tr><td>Actions to minimize incomprehension:
classifying, collecting, defining, explaining, exploring, exploiting, decomposing, grouping, imposing, interpreting, isolating, reconstructing, relating, removing, separating items etc.</td></tr><tr><td>Actions to minimize inexperience:
exploring cases, exploring examples, exploring idealisations, exploring simplifications etc.</td></tr><tr><td>Actions to minimize skepticism:
comparing, demanding, excluding, explaining, gathering, imposing, justifying, reasoning, refuting, rejecting, requiring, searching for, testing, verifying items etc.</td></tr><tr><td>Actions to minimize unfamiliarity:
building an analogy, building a model, defining concepts, looking for items, outlining facts</td></tr><tr><td>Actions to maximize ability:
training to abstract, training to eliminate, training to exploit, training to organize, training to perform, training to relate, training to select, training to simplify, training to solve, training to transform etc.</td></tr><tr><td>Actions to maximize clarity:
classifying, connecting, defining, idealizing, ordering, organizing, outlining, reducing, relating, removing, separating, simplifying, summarizing items etc.</td></tr><tr><td>Actions to maximize criticism:
questioning an assumption, questioning a premise, questioning the framework, questioning a representation, questioning the necessity, questioning the sufficiency, questioning a method, questioning a path, questioning a solution, questioning the value etc.</td></tr><tr><td>Actions to maximize exploitation:
using an assumption, using a fact, using a given, using a constraint, using a property, using a relationship, using a restriction, using a statement, using a theorem etc.</td></tr><tr><td>Actions to maximize imagination:
weakening an assumption, weakening a bound, weakening a condition, weakening a constraint, weakening a requirement, weakening a restriction, weakening a rule, weakening a statement etc.</td></tr><tr><td>Actions to maximize intuition:
exploring an analogy, exploring a case, exploring an example, exploring a diagram, exploring a metaphor, exploring a model, exploring a story, exploring a simplification etc.</td></tr></table>
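The product structure $S_A = O_p \times O_b$ can be sketched directly. The tiny operation and object samples below are drawn loosely from the table above, purely for illustration.

```python
from itertools import product

OPERATIONS = ["comparing", "ranking", "selecting"]   # sample of Op
OBJECTS = ["assumptions", "constraints", "methods"]  # sample of Ob

# Each conceptual action pairs one operation with one object.
ACTIONS = [f"{op} {obj}" for op, obj in product(OPERATIONS, OBJECTS)]
# |S_A| = |Op| x |Ob| = 3 x 3 = 9 actions in this tiny sample
```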
References
[1] L. Sterling and E. Shapiro (1986). The Art of Prolog: Advanced Programming Techniques.
[2] M. Mohri (2018). Foundations of Machine Learning.
[3] K. Hornik, M. Stinchcombe and H. White (1989). Multilayer feedforward networks are universal approximators.
[4] L. Guilhoto (2018). An Overview of Artificial Neural Networks for Mathematicians.