Title

Logical Neural Networks

Authors

Ryan Riegel, Alexander Gray, Francois Luus, Naweed Khan, Ndivhuwo Makondo, Ismail Yunus Akhalwaya, Haifeng Qian, Ronald Fagin, Francisco Barahona, Udit Sharma, Shajith Ikbal, Hima Karanam, Sumit Neelam, Ankita Likhyani, Santosh Srivastava

Abstract

We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning). Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation. Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning, including classical first-order logic theorem proving as a special case. The model is end-to-end differentiable, and learning minimizes a novel loss function capturing logical contradiction, yielding resilience to inconsistent knowledge. It also enables the open-world assumption by maintaining bounds on truth values which can have probabilistic semantics, yielding resilience to incomplete knowledge.
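
The abstract's central ideas can be made concrete in a few lines. Below is a minimal sketch, not the authors' reference implementation: it assumes a Lukasiewicz-style weighted real-valued conjunction of the form max(0, min(1, beta - sum_i w_i * (1 - x_i))), truth values carried as [lower, upper] bounds for the open-world assumption, and a contradiction loss that penalizes a lower bound exceeding its upper bound. The names WeightedAnd and contradiction_loss are illustrative and do not come from the paper.

import torch

class WeightedAnd(torch.nn.Module):
    """Weighted real-valued AND over n inputs, with learnable bias and weights."""

    def __init__(self, n_inputs: int):
        super().__init__()
        self.beta = torch.nn.Parameter(torch.ones(1))        # bias term
        self.weights = torch.nn.Parameter(torch.ones(n_inputs))

    def forward(self, lower: torch.Tensor, upper: torch.Tensor):
        # The conjunction is monotone in its inputs, so the output lower
        # bound is computed from the input lower bounds, and likewise
        # for the upper bounds.
        l = torch.clamp(self.beta - (self.weights * (1 - lower)).sum(-1), 0, 1)
        u = torch.clamp(self.beta - (self.weights * (1 - upper)).sum(-1), 0, 1)
        return l, u

def contradiction_loss(lower: torch.Tensor, upper: torch.Tensor):
    # A neuron is in contradiction when its lower truth bound exceeds
    # its upper bound; minimizing this term drives the network toward
    # logical consistency.
    return torch.clamp(lower - upper, min=0).sum()

# Usage: two propositions with open-world truth bounds [L, U] in [0, 1].
lo = torch.tensor([0.9, 0.7])   # lower bounds of the two inputs
hi = torch.tensor([1.0, 0.8])   # upper bounds of the two inputs
neuron = WeightedAnd(n_inputs=2)
l, u = neuron(lo, hi)
loss = contradiction_loss(l, u)

Clamping to [0, 1] keeps truth values in the unit interval, and because every operation above is differentiable almost everywhere, the weights and bias can be trained end-to-end by gradient descent, as the abstract describes.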
