Will AI ever become conscious?
I recently read an interesting paper by Alexander Lerchner (Google DeepMind): "The Abstraction Fallacy". Its core argument is that AI cannot be conscious, because computation inherently depends on a map-maker. We could map the entire world, or even the entire brain, perfectly, but that would not make the map itself a brain, or give it human consciousness. As I read it, the underlying intuition is that functional equivalence is not ontological equivalence.
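A loose programming analogy for this intuition (my own illustration, not from the paper): two procedures can be extensionally indistinguishable, agreeing on every input, while remaining distinct objects with distinct internal processes. Equivalence of behavior does not collapse into identity of the thing that produces the behavior.

```python
# Illustration (my own, not from the paper): functional equivalence
# does not imply identity of the underlying thing.

def square_by_multiplication(n: int) -> int:
    """Compute n^2 by direct multiplication."""
    return n * n

def square_by_odd_sums(n: int) -> int:
    """Compute n^2 as the sum of the first |n| odd numbers."""
    return sum(2 * k + 1 for k in range(abs(n)))

# The two functions are functionally equivalent on every input tested...
assert all(square_by_multiplication(n) == square_by_odd_sums(n)
           for n in range(100))

# ...yet they are distinct objects realizing distinct processes.
assert square_by_multiplication is not square_by_odd_sums
```

The analogy is deliberately weak: it shows only that behavioral equivalence is cheaper than identity, which is the direction of Lerchner's point, not a proof of it.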
The intuition is compelling, and it pushes me to ask a further question.
Suppose that one day human civilization identifies a "pattern Z": the specific physical or dynamical condition that gives rise to consciousness. My thought is this: any description of pattern Z must be given and communicated in humans' structured language. I think this is the bottleneck; we have no other tool. The moment we describe it, it becomes structured and modular, and it can be seen as a pattern.
This leaves us with two options:
A. If pattern Z is fully captured by its structural description, then in principle it can be realized by computation. To still maintain Lerchner's position, we would have to show that some part of pattern Z cannot be computed even though it has a structural description, or that some non-computable effect is lost at the moment the capture is complete. I think that "capturing something for computation" and "the computation itself" are not the same thing, and the gap between them is exactly where the further argument should be made.
B. We admit from the start that consciousness depends on something non-structural: a property belonging to biology that no formal description can reach. But if so, then building a conscious AI would be like building a biological Superman; it would raise further questions about the objective moral status of AI, and it may well be impossible to achieve at all.
I agree that AI is very unlikely to be conscious, but the further point is worth thinking through. The basic idea is that the closer we get to an answer to "how to compute this consciousness", the blurrier the boundary between computation and physical reality may become; this is how I understand option A.
So this is not a question of whether we have enough data, but of what kind of answer would be meaningful. If there is a creator, or "the ONE", we first have to figure out how human consciousness was created; if there is not, we still have to figure out what consciousness is.
Paper: https://philarchive.org/rec/LERTAF

#AI #Consciousness #PhilosophyOfMind
夜雨聆风