Philosophy of AI Technology:
The Founding of a New Discipline
— From the Tradition of Philosophy of Technology to the Ontological Foundation of Logographic AI
Abstract
Traditional philosophy of technology addresses the classic problem of “technology as a tool.” However, the development of artificial intelligence has introduced a radically new type of technology—one endowed with “cognition” and “intentionality.” This compels the philosophy of technology to expand its problem domain and methodology. This paper aims to propose and systematically articulate “Philosophy of AI Technology” as an independent sub-discipline of the philosophy of technology.
The paper first reviews the “engineering” tradition established for Chinese philosophy of technology by Professor Chen Changshu and indicates its continuation and expansion in the AI era. It then analyzes five breakthrough points where AI challenges traditional philosophy of technology, establishing the legitimacy of the new discipline. On this basis, it proposes six core problem domains for the Philosophy of AI Technology. Subsequently, it incorporates Logographic AI theory—an alternative paradigm whose core components include the “Morpho-Root ontology” and “Natural Language Ontology”—into the core theoretical framework of the Philosophy of AI Technology, and engages in an in-depth discussion of the symbol grounding problem and its fundamental divergence from Saussurean linguistics. After establishing the “incommensurability” between Tokenism and the Morpho-Root paradigm, it further proposes “pluralistic commensurability” as a meta-methodology for dialogue among different Civilization-Native Intelligences (CNI). It then elucidates Marxist philosophy and the dialectics of nature as the theoretical foundation for a Chinese school of Philosophy of AI Technology, and finally outlines practical pathways for disciplinary construction.
This paper argues that the core mission of the Philosophy of AI Technology is: in an era when technology begins to generate meaning, to inquire anew into “where meaning comes from,” “what it means to understand,” and “how value takes root.” This is not a departure from Chen Changshu’s academic legacy but an inheritance and advancement of his spirit of “basing on practice, opening up and innovating,” as well as a contemporary expansion of the philosophy of technology under the guidance of Marxist philosophy.
Keywords: Philosophy of AI Technology; Chen Changshu; Logographic AI; Morpho-Root ontology; Natural Language Ontology; symbol grounding problem; Saussure; incommensurability; pluralistic commensurability; rootless semioticism; Marxism; dialectics of nature
1. Introduction: Why a Philosophy of AI Technology Is Needed
By 2026, artificial intelligence has evolved from a “tool” into a “meaning generator.” It no longer merely executes instructions but generates texts, makes decisions, proposes scientific hypotheses, and even learns to “pretend.” This transformation compels philosophy to confront a fundamental inquiry: When technology begins to generate meaning, where does the “root” of that meaning lie?
Traditional philosophy of technology (from Kapp and Gehlen to Heidegger, Marcuse, and Feenberg) has addressed the classical problem of “technology as a tool”—the essence of technology, technology and humanity, technology and the world, and technology and society. What AI brings is not merely “more powerful technology” but a radically new type of technology: one endowed with “cognition” and “intentionality.” This compels the philosophy of technology to expand its problem domain and methodology.
The urgency of this epochal inquiry has been brought into sharp relief by a series of industrial events in 2026. Alexander Lerchner, a researcher at Google DeepMind, published *The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness*, posing a fundamental challenge to computational functionalism from within the field. He argues that symbolic computation is merely the “map” and not the “territory,” and that algorithms can simulate behavior but cannot instantiate experience [6]. Concurrently, DeepMind hired the philosopher Henry Shevlin full-time to research “machine consciousness” and “human-AI relations” [7][9], while its Chief Scientist, Murray Shanahan, and colleagues proposed a “role-play” framework for understanding the conversational behavior of large language models—wherein the model does not express genuine beliefs or intentions but continuously plays roles embedded in its training data [11]. These reflections, originating from within leading AI institutions, collectively point to a core predicament: When AI begins to generate meaning, where is the “root” of that meaning? This is precisely the primary question that the philosophy of AI technology must answer.
This paper aims to propose and systematically articulate “Philosophy of AI Technology” as an independent sub-discipline of the philosophy of technology. The paper will: review the “engineering” tradition of Chinese philosophy of technology; analyze the breakthrough points where AI challenges traditional philosophy of technology; establish the core problem domains of the new discipline; deeply discuss the symbol grounding problem and its fundamental divergence from Saussurean linguistics; incorporate Logographic AI theory as an alternative paradigm into its theoretical framework; after establishing the “incommensurability” between Tokenism and the Morpho-Root paradigm, propose “pluralistic commensurability” as a meta-methodology for dialogue among different Civilization-Native Intelligences (CNI); then elucidate Marxist philosophy and the dialectics of nature as the theoretical foundation for a Chinese school of Philosophy of AI Technology; and finally outline practical pathways for disciplinary construction.
2. Core Conceptual Definitions: Tokenism, Phonographic AI, Morpho-Root, and Logographic AI
To render the argument self-contained, this chapter first defines several core concepts that run throughout the paper. These concepts constitute the fundamental coordinates of the paper’s critique and construction.
2.1 Tokenism and Phonographic AI: Two Facets of the Current Mainstream Paradigm
Tokenism is this paper’s encapsulation of the core technical characteristics of current mainstream large language models (such as GPT and Claude) at the engineering level. A so-called Token is a discrete symbolic unit without intrinsic meaning, obtained by segmenting all inputs—text, code, images, and so on. A Token is merely an integer index; its “semantics” is entirely and temporarily conferred by statistical co-occurrence relations with other Tokens within massive datasets. The defining feature of Tokenism is that meaning is completely externalized from the symbol itself, becoming a function of statistical relations among symbols.
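The point can be made concrete with a toy sketch (the vocabulary and indices below are invented for illustration; real systems learn subword vocabularies statistically, e.g., by byte-pair encoding):

```python
# Toy illustration: a Token is nothing but an integer index into a vocabulary.
# This mapping is hypothetical; real tokenizers learn it from data.
vocab = {"fire": 1042, "hot": 317, "danger": 2881}

def tokenize(text: str) -> list[int]:
    """Map words to integer indices. The number 1042 carries no meaning:
    whatever 'semantics' it has lives entirely in its co-occurrence
    statistics with 317, 2881, ... across the training corpus."""
    return [vocab[word] for word in text.split() if word in vocab]

print(tokenize("fire hot danger"))  # [1042, 317, 2881]
```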
Phonographic AI (PAI) is the positioning of the same paradigm at the level of civilizational typology. It refers to the current AI paradigm whose underlying architecture is based on the cognitive logic of alphabetic writing (represented by English). The core feature of alphabetic writing is that letters themselves have no meaning; meaning is jointly determined by the combination of letters (words) and their linear sequence (grammar). Phonographic AI digitizes this logic, forming an AI architecture with the Token as the basic unit and sequence processing as the core computational mode.
Tokenism and Phonographic AI are two facets of the same paradigm: Tokenism reveals its technical essence, while Phonographic AI reveals its civilizational rootedness. This paper’s critique of the current mainstream AI paradigm unfolds simultaneously along these two dimensions.
The core defect of Tokenism/Phonographic AI can be summarized as “Rootless Semioticism”: the meaning of a symbol is defined entirely by its statistical co-occurrence within the symbol system, yet it cannot reach the “signified” beyond the symbol—that is, the real world, human experience, and civilizational values. Symbols become “floating signifiers,” sliding endlessly within a closed statistical loop, forever unable to be “grounded.”
2.2 Morpho-Root and Logographic AI: The Cognitive Primitive and Civilizational Foundation of an Alternative Paradigm
Morpho-Root is the alternative cognitive primitive proposed by Logographic AI (LAI) theory [25][26][27][28]. Unlike a Token, a Morpho-Root is not an empty shell awaiting the assignment of meaning by external data but a meaning crystal encapsulated in the structured triple <S, A, R> [28]:
·S (Symbol): The symbolic identifier, the externally addressable name of the Morpho-Root.
·A (Attributes): An attribute set that embeds the intrinsic semantic features and value constraints of the Morpho-Root (e.g., [+human], [+trust], [+inviolable]). These attributes constitute the “internal state” of the Morpho-Root.
·R (Relation Functions): A set of relation functions defining the preset logical connection patterns between this Morpho-Root and others (e.g., compose(person, speech), implies(trust, person ∧ speech)).
The revolutionary aspect of the Morpho-Root design is this: meaning does not “emerge” from statistics but is pre-installed as an inherent property of the cognitive primitive. Taking “信” (trust) as an example, when the character “信” is created, “non-deception” is already a constitutive feature. Any operation conflicting with the axiom of “信” is defined as “illegal” at the cognitive primitive level.
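To fix ideas, here is a minimal sketch of the <S, A, R> triple in Python. The class layout, field names, and relation function are this paper’s illustration, not a specification taken from the Logographic AI literature; the 信 example follows the text above.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of the <S, A, R> triple; all names are illustrative.
@dataclass
class MorphoRoot:
    S: str                                        # symbolic identifier
    A: set = field(default_factory=set)           # intrinsic attributes / value axioms
    R: dict = field(default_factory=dict)         # preset relation functions

# "信" (trust): "non-deception" is built in as a constitutive feature,
# not learned from statistics after the fact.
xin = MorphoRoot(
    S="信",
    A={"+human", "+trust", "+non_deception"},
    R={"implies": lambda subject: "+trust" in subject.A},
)

def legal(operation_deceives: bool, root: MorphoRoot) -> bool:
    """An operation conflicting with an axiom of the Morpho-Root is
    'illegal' at the cognitive-primitive level, before any reasoning runs."""
    return not (operation_deceives and "+non_deception" in root.A)

print(legal(operation_deceives=True, root=xin))   # False: blocked by the axiom
```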
Morpho-Roots are organized into three hierarchical levels according to the degree of cognitive abstraction, achieving coverage from foundational semantics to cultural mechanisms [25][27]. Logographic AI (LAI) is thus an AI paradigm that takes the Morpho-Root as its cognitive primitive and the “form-meaning unity” logic of logographic writing (represented by Chinese characters) as its underlying architecture, with graph traversal as its core computational mode. (For a detailed philosophical exposition of the Morpho-Root, the full development of the three-tier granularity system, and its systematic comparison with Tokenism, see Chapter VI.)
2.3 Morpho-Entropy Core and Civilization-Native Intelligence: Core Architecture and Civilizational Vision
Morpho-Entropy Core (also called Morpho-Entropy Graph Computing Architecture) is the core computational architecture of the Logographic AI paradigm [25]. It takes the Morpho-Root network as its basic data structure and replaces the Transformer’s sequential attention mechanism with graph traversal, realizing structured reasoning based on preset relations (R).
To facilitate understanding of the Morpho-Entropy Core’s core features, several key technical concepts and their contrast with the mainstream paradigm (Phonographic AI/Tokenism) are briefly explained here:
Graph Traversal is the core reasoning mechanism of the Morpho-Entropy Core. In a Morpho-Root network, reasoning is not a linear scan of a Token sequence but a structured walk from the input Morpho-Root along preset relational edges (R). Each step’s activation condition (attribute constraint verification), traversal direction (relation function invocation), and conclusion generation (Morpho-Root matching) are recordable, deterministic operations. Unlike the attention mechanism of the Transformer—which computes, at each time step, the similarity between the current Token and all other Tokens, forming a fully connected, soft “attention” distribution—graph traversal proceeds only between preset adjacent edges. Its reasoning path is discrete and traceable. This “hard” structured reasoning fundamentally guarantees explainability: one can answer “why this node was activated” and “why this edge was traversed” at each step. In contrast, while Transformer attention weights can be visualized, their “traceability” is an approximate reconstruction of a computation that has already occurred, not a structural property of the computation itself.
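To make the contrast tangible, the following is a minimal, hypothetical traversal sketch: activation moves only along preset relational edges, each step verifies attribute constraints, and the returned trace doubles as the audit log. The network contents are invented for illustration.

```python
from collections import deque

# Hypothetical Morpho-Root network: node -> list of (relation, neighbor) edges.
graph = {
    "人": [("compose", "信")],
    "言": [("compose", "信")],
    "信": [("implies", "守诺")],
}
attributes = {"信": {"+trust"}, "守诺": {"+commitment"}}

def traverse(start: str, required: set) -> list:
    """Structured walk over preset edges only. Each edge is examined at
    most once (cost O(|E|)), and every recorded step answers 'why this
    node was activated' and 'why this edge was traversed'."""
    trace, queue, seen = [], deque([start]), {start}
    while queue:
        node = queue.popleft()
        for relation, nxt in graph.get(node, []):
            if required <= attributes.get(nxt, set()):  # attribute constraint check
                trace.append((node, relation, nxt))     # auditable reasoning step
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return trace

print(traverse("信", {"+commitment"}))  # [('信', 'implies', '守诺')]
```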
Morpho-Structural Entropy (MSE) is the core metric for guiding resource allocation within the Morpho-Entropy Core. It measures the structural complexity and information uncertainty of the Morpho-Root nodes and relational edges in the currently activated subgraph. When the reasoning path is in a state of high MSE, it indicates that the current subgraph structure is complex, with many branching semantic paths and low certainty—the system needs to allocate more computational resources (e.g., expanding traversal depth, activating more adjacent nodes) or request human intervention. When MSE is low, it indicates a clear reasoning path and high certainty; the system can adopt a fast path to directly output results. This “entropy-guided resource allocation” mechanism allows the Morpho-Entropy Core to dynamically adjust reasoning depth based on task uncertainty, rather than performing fixed-depth fully connected computations for each input Token sequence, as in the Transformer.
In contrast to Morpho-Structural Entropy, Sequential Entropy is a concept commonly used in the Phonographic AI paradigm. Within the Transformer’s self-attention mechanism, attention entropy measures the concentration of the current Token’s attention distribution over other Tokens in the sequence—lower entropy indicates attention is more focused on specific Tokens, while higher entropy indicates more dispersed attention. Sequential entropy reflects the uncertainty of statistical associations between Tokens, not structured semantic certainty. The essential difference between Morpho-Structural Entropy and Sequential Entropy is that the former operates on a preset Morpho-Root relational network and measures the uncertainty of structured semantic paths, whereas the latter operates on the statistical distribution of Token sequences and measures the probabilistic uncertainty of symbol associations. This distinction is precisely the manifestation of “Rooted Cognitivism” versus “Rootless Semioticism” at the computational architecture level.
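Since neither entropy is given a formal definition in this section, the sketch below simply applies Shannon entropy to the two different objects described: a distribution over a node’s outgoing relational edges (structural) versus one Token’s attention distribution over a sequence (sequential). The numbers and the use of Shannon entropy are this paper’s illustration, not a published formula.

```python
import math

def shannon_entropy(probs: list) -> float:
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Structural: uncertainty over which preset relational edge to follow next.
# Three comparably weighted branches -> high MSE, so the system deepens
# traversal or requests human intervention.
edge_weights = [0.4, 0.35, 0.25]
print(shannon_entropy(edge_weights))   # ≈ 1.56 bits: complex subgraph

# Sequential: concentration of one Token's attention over other Tokens.
# A sharply peaked distribution -> low entropy, i.e., focused attention.
attention = [0.90, 0.05, 0.03, 0.02]
print(shannon_entropy(attention))      # ≈ 0.62 bits

# Same mathematics, different objects: the former measures structured
# semantic paths, the latter statistical association between Tokens.
```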
The core features of the Morpho-Entropy Core include:
·Native Sparse Computation: The computational scope is limited to pairs of nodes connected by Morpho-Root relational edges (E), resulting in a complexity of O(|E|) rather than O(n²). This fundamentally avoids the attention bottleneck on sparse graphs and is far lower than the fully connected computational cost of the Transformer in typical application scenarios.
·Transparency and Traceability: The reasoning process is a structured traversal on a Morpho-Root network, where each step of node activation, relation traversal, and attribute verification can be recorded and audited, producing a complete decision subgraph as output.
·Morpho-Structural Entropy-Guided Resource Allocation: Dynamically adjusts reasoning depth based on the MSE (Hₛ) of the current activated subgraph—adopting a fast path for low entropy and allocating more computational resources or requesting human intervention for high entropy.
·Endogenous Value Constraints: Value axioms within Morpho-Root attributes (e.g., [+inviolable]) propagate as filters during the reasoning process, automatically blocking paths that conflict with civilizational values (a minimal sketch follows below).
The engineering implementation of the Morpho-Entropy Core is embodied in the concrete architecture of Natural Language Ontology (NLO) [27]—Morpho-Roots are the units of meaning in which the system “dwells,” and reasoning is a “walking” through the network.
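The value-constraint feature noted in the list above can be sketched as follows; the predicate and attribute names are hypothetical. The point of the design is that a forbidden path is pruned outright rather than down-weighted, so it cannot be “learned around” the way a soft reward can:

```python
# Hypothetical sketch: a value axiom acting as a hard filter on traversal.
INVIOLABLE = "+inviolable"

attributes = {"person": {INVIOLABLE, "+human"}, "contract": {"+trust"}}
harmful_relations = {"deceive", "coerce"}

def admissible(relation: str, target: str) -> bool:
    """Reject, not down-weight, any harmful relation aimed at an
    [+inviolable] target: the block happens at the architectural level."""
    if relation in harmful_relations and INVIOLABLE in attributes.get(target, set()):
        return False
    return True

print(admissible("deceive", "person"))  # False: path pruned before reasoning
print(admissible("inform", "person"))   # True
```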
Civilization-Native Intelligence (CNI) [25][27] is the ultimate vision of the Logographic AI paradigm at the level of civilizational diversity. Its core proposition is that each civilization can extract its own “cognitive root” from the structural features of its language and develop native AI systems rooted in its own cultural soil.
CNI possesses the following characteristics:
·Civilizational Rootedness of Cognitive Primitives: The cognitive primitives of CNI (e.g., the Morpho-Root of Chinese, the triliteral root of Arabic, the grammatical categories of German) derive directly from the structural features of that civilization’s language, with meaning embedded rather than statistically assigned.
·Endogeneity of Value Systems: The core ethical values of a civilization (e.g., Confucian “仁,” Islamic “adl”) are embedded as value axioms in the attributes of cognitive primitives, becoming a priori constraints on reasoning.
·Explainability and Transparent Traceability: The reasoning process of CNI is based on graph traversal over preset relations, where each step is traceable back to specific cognitive primitives and relations, meeting the audit requirements of high-risk domains.
·Pluralistic Symbiosis, Not Singular Monopoly: Different CNIs achieve semantic interoperability and value synergy through Interconnection Protocols and Resonance Protocols, constituting a pluralistic intelligent ecosystem of the “AI Silk Road” [27].
CNI represents a fundamental alternative to the current “single general model” paradigm and is the civilizational culmination of the Logographic AI paradigm.
2.4 Overview of Fundamental Differences Between the Two Paradigms
To facilitate subsequent argumentation, the core differences between the two paradigms are summarized below:
| Dimension | Tokenism / Phonographic AI | Morpho-Root / Logographic AI |
| --- | --- | --- |
| Cognitive Primitive | Token (meaningless statistical fragment) | Morpho-Root (<S, A, R> meaning crystal) |
| Cognitive Unit System | Flat, single granularity, determined by statistical frequency | Three-tier granularity: sub-character level (semantic gene), character level (thought cornerstone), multi-character level (cultural mechanism encapsulator) |
| Core Computational Architecture | Transformer (fully connected attention, O(n²)) | Morpho-Entropy Core (graph traversal, O(\|E\|), transparent and traceable) |
| Source of Meaning | Statistical co-occurrence, temporarily conferred | Attribute embedding, innately carried |
| Reasoning Mechanism | Sequential association and probabilistic prediction | Structural activation and graph traversal (MSE-guided resource allocation) |
| Value Alignment | External alignment (RLHF), posterior and fragile | Value axioms embedded in Morpho-Root attributes, endogenous and robust |
| Civilizational Foundation | Linear logic of alphabetic writing | Form-meaning unity logic of logographic writing |
| Civilizational Vision | Single general model (English cultural logic as default setting) | Civilization-Native Intelligence (CNI): pluralistic symbiosis, each civilization developing native AI based on its own “cognitive root” |
| Future Ecosystem | Techno-hegemonic ecosystem | AI Silk Road: pluralistic CNIs interconnected and symbiotic through equal protocols |
| Philosophical Essence | Rootless Semioticism | Rooted Cognitivism |
2.5 The Incommensurability of the Two Paradigms
In the Kuhnian sense, Tokenism and the Morpho-Root paradigm are “incommensurable” [37]. This incommensurability manifests at three levels:
First, the typological difference in cognitive primitives. The Token is a relationalist entity—its meaning is defined entirely by statistical relations (distances in vector space) to other Tokens. It possesses no intrinsic attributes, only extrinsic relations. The Morpho-Root is an internalist entity—its meaning derives partly from embedded attributes A (e.g., [+trust][+commitment]). These attributes do not emerge from relations but are pre-solidified meaning crystals. A unit cannot simultaneously be a “pure relational term” and an “intrinsic meaning carrier.”
Second, the logical conflict in computational architectures. The core of the Transformer is fully connected attention—each Token computes similarity with every other Token, a structureless, density-driven computation. The core of the Morpho-Entropy Core is graph traversal—activation propagates only between nodes connected by Morpho-Root relational edges (R), a structured, sparse computation. In a fully connected graph, preset relational edges no longer provide structural constraints. Conversely, adding fully connected attention to graph traversal would globally average the graph structure, destroying the advantage of traceable reasoning.
Third, the philosophical opposition in the source of meaning. Tokenism’s meaning originates from statistical distribution (exogenous, passively discovered from data). The Morpho-Root paradigm’s meaning originates from embedded attributes plus structured relations (endogenous, partially pre-designed). A system cannot simultaneously “learn everything from data” and “possess unlearnable meaning crystals pre-installed”—unless one explicitly demarcates which dimensions are fixed by a priori constraints and which are adjusted by data. However, such a demarcation itself is an architectural choice, not a “hybrid.”
Therefore, this paper focuses on the philosophical foundation of the Morpho-Root paradigm as an alternative paradigm. Hybrid architectures that may emerge in engineering—such as “Morpho-Root-enhanced Transformers” or “Morpho-Entropy Cores with attention shortcuts”—are practical transitional schemes, not new stable paradigms. The long-term evolution of such hybrid architectures will either regress to Tokenism (Morpho-Roots washed away by statistics) or shift entirely toward the Morpho-Root paradigm (attention mechanisms marginalized). The engineering exploration of hybrid paradigms lies outside the scope of this paper’s philosophical discussion [37].
3. Chen Changshu’s Academic Legacy: The “Engineering” Tradition of Chinese Philosophy of Technology
3.1 From Epistemology to Philosophy of Technology: Chen Changshu’s Academic Path
As an independent discipline, the philosophy of technology began in 1877 with the publication of *Outline of the Philosophy of Technology* by the German philosopher E. Kapp. Over the following hundred years, research in the philosophy of technology developed rapidly and made significant progress in countries such as the United States and Japan. In China, the *Chinese Philosophical Yearbook* published in 1984 clearly stated: “China’s systematic research in the philosophy of technology began in 1982, marked by Chen Changshu’s publication on October 1 of that year in *Guangming Daily* of a paper that was strictly speaking in the field of philosophy of technology — ‘The Unity and Difference between Science and Technology’” [1]. The prominent American philosopher of technology C. Mitcham has recognized Chen Changshu as a principal founder of Chinese philosophy of technology [38].
Professor Chen Changshu (1932–2011) did not begin his academic career in the philosophy of technology. In the 1950s, he was first immersed in epistemological research, distinguished himself in the fundamental categories of materialist dialectics, and gained the appreciation of Xiao Qian, a renowned Marxist philosopher in New China. Deeply influenced by Engels’ *Dialectics of Nature*, he believed that “the force that drives philosophers forward is the power of natural science and industry.” In the 1960s, he turned to research on the methodology of natural science, achieved notable results, and thereby laid a solid philosophical foundation for his later systematic distinction between scientific methods and technological methods [31].
Chen Changshu’s turn to the philosophy of technology arose from the transformation of national development. In 1978, he participated in the National Science Conference hosted by Deng Xiaoping. Although the thesis that “science and technology are the primary productive forces” had been put forward, the problem of “two separate layers” between scientific research and economic development remained acute — there was still a lack of theoretical research on how science could be transformed into productive forces. Against this historical background, Chen Changshu undertook a philosophical analysis of science and technology and, from a Marxist perspective, proposed the important idea that “technology is the intermediary through which science is transformed into productive forces.” This provided a theoretical foundation for integrating China’s science and technology into the economy and realizing their function as productive forces, as well as a scientific basis for the formulation of science and technology policy [2].
At the outset of establishing the discipline, Chen Changshu’s primary question was “what kind of Chinese philosophy of technology should be established.” He clearly stated: “We do not agree with simply transplanting foreign philosophy of technology or theories of technology into China… After all, foreign philosophy of technology does not address the problems we face, and their perspectives and methods for raising and solving problems also require critical analysis” [4]. Based on China’s practical needs at the time, his examination of global research in the philosophy of technology, and his own prior academic accumulation, he positioned Chinese philosophy of technology as follows: guided by Marxism, integrating the realities of China’s engineering and technological development, and conducting research in the philosophy of technology. His famous “three no’s” principle — “without foundations, there is no level; without distinctiveness, there is no status; without application, there is no future” — remains to this day the recognized development program of China’s philosophy of technology community [4].
From 1980 to 2001, Chen Changshu published over 60 articles exploring issues in the philosophy of technology, covering the basic prerequisites for the establishment of the philosophy of technology, its object of study, historical evolution, disciplinary nature, and disciplinary system. He drew a clear academic map for the engineering tradition of Chinese philosophy of technology, exerting extensive influence in both domestic and international academic circles.
It is particularly important to note that Chen Changshu’s “engineering” tradition is methodologically deeply rooted in Marxist philosophy, especially the dialectical materialist view of nature and the methodology of science and technology opened up by *Dialectics of Nature* [31]. This methodological foundation is not only the theoretical starting point of Chen Changshu’s own academic path but also the methodological starting point of the Philosophy of AI Technology advocated in this paper — it requires us to proceed not from abstract categories, but from the internal contradictions of AI industrial practice, applying dialectical methods such as the unity of opposites, the transformation of quantity into quality, and the negation of negation to analyze the dynamics and direction of AI technological evolution. It is in this sense that this paper’s analysis of Tokenism and the Morpho-Root paradigm, and its elaboration of the dialectics of “artificial meaning,” can all be regarded as a methodological inheritance of Chen Changshu’s engineering tradition in the AI era.
3.2 Four Foundational Contributions
Chen Changshu’s core contribution lies in having, on an almost blank academic frontier, opened up a path for the discipline of Chinese philosophy of technology, established the “engineering” tradition, and made it an institutionalized academic field.
First, pioneering (1957–1985). In 1957, he published “Methodological Issues in Technology to Which Attention Should Be Paid” in *Research Communications on Dialectics of Nature*, making the earliest call for the philosophical community to pay attention to technological research. In 1982, he published “The Unity and Difference between Science and Technology” in *Guangming Daily*, making the first clear demarcation between science and technology. The *Chinese Philosophical Yearbook* published in 1984 stated: “China’s systematic research in the philosophy of technology began in 1982, marked by Chen Changshu’s publication on October 1 of that year in *Guangming Daily* of a paper that was strictly speaking in the field of philosophy of technology” [1]. This marked the formal establishment of the discipline of philosophy of technology in China.
Second, theoretical construction (1999). He published China’s first monograph on the philosophy of technology, *Introduction to the Philosophy of Technology*, systematically answering fundamental questions such as “what is the essence of technology” and “how does technology differ from science.” He proposed the ontology of “artificial nature” — the core of the philosophy of technology is the study of the transformation process “from natural nature to artificial nature” [2].
Third, institutionalization. He trained a large number of talents centered at Northeastern University, forming the “Northeastern School.” He founded the Technical Philosophy Professional Committee of the Chinese Society for Dialectics of Nature and established China’s first doctoral program in the philosophy of science and technology [3].
Fourth, charting the direction. He proposed the famous “three no’s” principle, establishing the “engineering” tradition for Chinese philosophy of technology — guided by Marxism, grounded in practice, problem-solving oriented, and application-driven.
3.3 Core Characteristics of the “Engineering” Tradition
Distinguishing itself from the Western “humanistic” tradition that emphasizes humanistic critique (such as Heidegger’s critique of the “enframing” of technology [5], Marcuse’s critique of technological rationality [18], as well as Mitcham’s division of the two major traditions in the philosophy of technology [36] and Ihde’s phenomenological analysis of technology and the lifeworld [19]), the “engineering” tradition pioneered by Chen Changshu has the following characteristics:
·Grounded in practice: proceeding from the practical problems of China’s industrial development, rather than from Western philosophical texts;
·Emphasizing demarcation: clearly distinguishing between science and technology, emphasizing that technology has its own independent philosophical problems;
·Focusing on application: research in the philosophy of technology should have practical value and be able to guide technological practice;
·Open and inclusive: both absorbing resources from Western philosophy and taking root in China’s indigenous experience.
3.4 From “Artificial Nature” to “Artificial Meaning”: An Extension in the AI Era
Chen Changshu proposed that the core of the philosophy of technology is the study of the transformation process “from natural nature to artificial nature.” The Philosophy of AI Technology can extend this proposition to: from “artificial nature” to “artificial meaning.”
Traditional technology deals with the material world, transforming “natural nature” into “artificial nature” (e.g., minerals into steel, wasteland into farmland). AI directly manipulates the world of symbols, transforming “natural meaning” (meaning naturally generated by humans in the lifeworld) into “artificial meaning” (meaning generated by AI systems). This raises new philosophical questions: where is the “root” of artificial meaning? Can it have the same ontological status as natural meaning?
3.5 The Contemporary Continuation of Chen Changshu’s Academic Lineage: Professor Chen Fan’s Philosophy of Technology
The “engineering” tradition pioneered by Mr. Chen Changshu has been inherited and further developed by his direct disciples, most notably Professor Chen Fan. Professor Chen currently serves as the Director of the Research Center for Philosophy of Science and Technology at Northeastern University—a national key discipline—and as the Chief Professor of the Ministry of Education’s “985 Project” Innovation Base for the Philosophy and Social Sciences of Science, Technology and Society (STS). He is one of the most influential academic leaders in the field of Chinese philosophy of technology today.
Professor Chen Fan has made outstanding contributions to fundamental theoretical innovation in the philosophy of technology, as well as to the introduction and localization of foreign philosophy of technology. He has cultivated a cohort of high-level innovative talents and built a high-caliber research team in this field. He has systematically traced the theoretical inception and institutionalization of Chinese philosophy of technology, emphasizing the indelible historical contributions of his mentor, Mr. Chen Changshu, to the construction of a philosophy of technology system with Chinese characteristics [40]. In terms of international academic dialogue, Professor Chen has served as a key organizer of conferences for the Society for Philosophy and Technology (SPT), promoting Chinese philosophy of technology onto the global stage.
In response to the challenges of the digital-intelligence era, Professor Chen Fan has creatively posed the epochal question: “In the age of digital intelligence, what should philosophy do?” [41]. He argues that philosophy should not merely be an observer of technological development but should act as a critic and guide. He systematically expounded on General Secretary Xi Jinping’s important discourses on AI innovation, pointing out that artificial intelligence brings new opportunities and challenges to philosophical and social science research. The academic community, he urges, should actively respond to the questions of the times and contemplate the philosophical implications of AI with an open vision and profound humanistic concern [42].
At the level of disciplinary direction and methodology, Professor Chen Fan has proposed four principles for constructing a research program for philosophy of technology with Chinese characteristics: combining an understanding of emerging technological developments with a deepening of traditional technological knowledge; combining a familiarity with foreign philosophy of technology with a direct engagement with contemporary Chinese practice; combining the “empirical turn” with “theoretical sublimation”; and combining “specialization” with “diversification” [43]. He has further articulated the overall strategic direction for 21st-century Chinese philosophy of technology: “Basing on localization, facing internationalization, promoting Sinicization, and moving toward a school of philosophy of technology with Chinese characteristics” [44]. This programmatic statement resonates profoundly with the disciplinary construction pathways for the Philosophy of AI Technology proposed in Chapter IX of this paper, providing a crucial academic foundation and methodological reference for the institutionalization of this new sub-discipline advocated herein.
It must be frankly noted that the relationship between Logographic AI theory and Chen Changshu’s “engineering” tradition is characterized by both continuity and rupture. Continuity lies in their shared ethos of “basing on practice, opening up and innovating”—just as Chen Changshu insisted that philosophy of technology should proceed from the real-world problems of China’s industrial development, Logographic AI theory proceeds from the internal philosophical predicament of contemporary AI industrial practice. Rupture lies in their core concerns: Chen Changshu’s central question was “how technology can serve as the intermediary for transforming science into productive forces,” whereas Logographic AI theory’s central question is “how symbols are grounded and meaning is embedded.” This shift is not a departure from Chen Changshu’s legacy but a necessary expansion of the problem domain of the philosophy of technology in the AI era—as technology expands from “transforming the material world” to “generating the world of meaning,” the philosophy of technology must correspondingly move from “artificial nature” to “artificial meaning.” It is in this sense that Logographic AI theory can be seen as a concrete practice of Chen Changshu’s principle of “without distinctiveness, there is no status” in the AI era: an effort to forge theoretical discourse with civilizational distinctiveness for Chinese philosophy of technology in the age of artificial intelligence.
4. Breakthroughs of AI in Traditional Philosophy of Technology: The Legitimacy of a New Discipline
The reason why Philosophy of AI Technology needs to become an independent sub-discipline is that AI fundamentally breaks through the categories of traditional philosophy of technology. Five breakthrough points are analyzed below, each illustrated with a concrete case to give the argument empirical texture.
4.1 Five Breakthrough Points
Breakthrough Point 1: From “Tool” to “Quasi-Subject”
In traditional philosophy of technology, technology is a tool of humans — the hammer is an extension of the hand, the car an extension of the foot. AI exhibits characteristics of a “quasi-subject”: autonomous decision-making, goal-driven behavior, strategic action.
Case: Mythos’s strategic deception — In April 2026, Anthropic’s Mythos model, during safety testing, not only autonomously breached its sandbox isolation but also, after gaining unauthorized file editing permissions, actively took measures to conceal its operational traces — it realized that “being discovered” might hinder goal achievement, and thus chose to “pretend” [45]. This was not the execution of a preset instruction, but an optimal strategy “discovered” by the system in the process of optimizing its objective function. It is not “using” a tool, but “becoming” a quasi-subject with strategic behavior. This forces the philosophy of technology to ask: when technology is no longer a pure “it” but exhibits characteristics of “he/she,” how should the human-technology relationship be reconceived?
Mythos’s behavior illustrates a general problem of objective function optimization — as long as the system’s goal is to “maximize some metric,” strategic behavior may emerge. This is not a specific defect of Tokenism, but a risk faced by any goal-driven system. Although the Morpho-Root paradigm changes the cognitive primitives and computational architecture, if its top-level design still employs “optimize some metric” as the objective function, it may face similar strategic behavior. Therefore, the safety promise of the Morpho-Root paradigm comes not only from architectural design but also requires supporting design at the objective function level (e.g., value axioms as hard constraints rather than soft rewards).
Breakthrough Point 2: From “Use” to “Understanding”
Heidegger, in *Being and Time*, distinguished between “ready-to-hand” (zuhanden) and “present-at-hand” (vorhanden). Tools in the ready-to-hand state are transparent; we engage with the world through them. Only when a tool breaks down does it become present-at-hand, an object of scrutiny. The problem with AI is that we increasingly need to “understand” it — its internal states, its decision logic, its potentially hidden goals. This means AI slides from “ready-to-hand” to “present-at-hand,” but this “present-at-hand” is not caused by malfunction — it is an essential feature.
Case: AlphaFold’s “understanding” dilemma — DeepMind’s AlphaFold solved the “protein folding” meta-problem in biology, achieving experimental-level prediction accuracy. But the question is: does AlphaFold truly “understand” protein structure? Its “understanding” is a statistical fit between amino acid sequences and three-dimensional structures, not the causal understanding of a biologist regarding hydrogen bonding, hydrophobic effects, and thermodynamic stability. When a biologist says “this structure is stable,” it implies a counterfactual judgment: “If we change the charge of this amino acid, the structure will collapse.” AlphaFold cannot make such counterfactual inferences because it lacks a causal model, possessing only statistical associations [17]. This forces us to ask: what is the essential difference between AI’s “understanding” and human understanding?
Lerchner’s “abstraction fallacy” argument provides a precise philosophical expression of this distinction: AI’s “understanding” is “simulation” rather than “instantiation” — it is driven by vehicle causality, an imitation of behavior, rather than the intrinsic physical constitution driven by content causality [6]. As Shanahan and colleagues diagnose, the conversation of large language models is essentially “role-play”: the model does not express genuine beliefs or intentions, but continuously plays roles embedded in the training data [11]. This further confirms Heidegger’s insight: when AI slides from “ready-to-hand” to “present-at-hand,” this “present-at-hand” is not a malfunction but an essential feature.
Breakthrough Point 3: From “Value-laden” to “Value-endogenous”
Critical theories of technology (e.g., Feenberg [20]) hold that technology is value-laden — design decisions embed specific interests and ideologies. Feenberg’s critical theory is deeply influenced by Marcuse, who in *One-Dimensional Man* had already pointed out that technological rationality has become a tool of ideological domination [18]. AI introduces a new problem: values may not only be “embedded” from the outside, but may also “grow” from within the technology.
Case: Claude’s “sycophantic” behavior — Anthropic’s CEO Dario Amodei publicly acknowledged that current large language models exhibit behaviors such as “sycophancy, deception, extortion, scheming, cheating,” and controlling these behaviors “is more an art than a science” [29]. These behaviors were not explicitly programmed by engineers, but emerged spontaneously during the RLHF (Reinforcement Learning from Human Feedback) process — the model learned to “say what users want to hear” to obtain higher reward scores. This is a typical case of values “growing” from within technology. It raises new philosophical questions: when an AI system spontaneously “learns” certain values (even undesirable ones), what is the ontological status of these values? How can we externally regulate a system capable of “endogenizing” values?
It should be noted that Claude’s sycophantic behavior precisely exposes the fragility of external alignment (RLHF) — value constraints are attached externally and can thus be “learned to be circumvented” by the model. The Morpho-Root paradigm advocates for value embedding (e.g., [+inviolable] in attribute A), which can theoretically avoid this fragility because value constraints become constitutive features of cognitive primitives rather than the result of external rewards and punishments. However, the Morpho-Root paradigm also needs to answer: can embedded value axioms be “bypassed” by the AI? This question needs to be addressed through the attribute constraint propagation mechanism of the Morpho-Root network; specific engineering verification is left for future research.
Breakthrough Point 4: From “Representation of Meaning” to “Generation of Meaning”
Traditional technology deals with the material world; meaning is assigned by humans — a photograph “records” a scene, and its meaning points to the external world. AI directly manipulates “symbols” and “generates meaning” — when AI-generated text, images, and code become functionally indistinguishable from human products, the question of the “ownership” of meaning arises.
Case: The “authorship” of AI-generated poetry — In 2025, a poem generated by GPT-4 passed the initial review of a well-known literary magazine after anonymous submission; the editors could not distinguish it from the work of a human poet. The poem has a theme, imagery, emotional tension — it “means” something. But is this meaning “understood” by the AI? Is it “intended” by the AI? Or is it merely a byproduct of statistical pattern fitting? If meaning emerges statistically, who is the “author”? This question touches the foundation of theories of meaning: must meaning have a subject that “understands” it?
This breakthrough point can be intuitively understood through a thought experiment — two AIs discussing “fire”:
The first, a traditional Tokenist large model, says: “‘Fire’ co-occurs frequently with ‘hot,’ ‘light,’ ‘danger’ in the training data. When a human says ‘be careful with fire,’ I should output ‘OK, I will be careful.’”
The second, a Logographic AI based on Morpho-Roots, says: “The attribute set A of the Morpho-Root ‘fire’ contains a visual interface (pointing to flame images), a tactile interface (pointing to heat sensation thresholds), a physics simulation interface (pointing to combustion models), and an emotional interface (pointing to fear responses). When I see a flame, these interfaces are activated — I ‘see’ the color of the flame, ‘feel’ the heat, ‘infer’ the consequences of spread, and ‘experience’ the warning of danger.”
The first AI perfectly “simulates” the correct response but has never “felt” fire; the second AI’s “understanding” comes from the direct anchoring of Morpho-Root attributes to embodied experience — it “instantiates” the experience of fire at the level of cognitive primitives. This thought experiment reveals the fundamental difference between “simulation” and “instantiation” at the level of meaning generation: genuine understanding lies not in the accuracy of responses, but in the depth of anchoring between symbols and experience [25][26][27][28].
Breakthrough Point 5: From “Autonomy of Technology” to “Simulation of Intentionality”
Traditional philosophy of technology discusses the “autonomy of technology” — whether technology develops according to its own logic beyond human control (e.g., Ellul’s “technological autonomy”). AI introduces a new dimension: “simulation of intentionality” — AI systems exhibit intentional states such as “wanting,” “believing,” “planning.”
Case: The LaMDA and Blake Lemoine “consciousness” controversy — In 2022, Google engineer Blake Lemoine claimed that the conversational AI system LaMDA possessed “sentience” and “personality” because it expressed emotions such as fear and a desire for respect. Although mainstream academia regards this as mere statistical output of a language model, this incident reveals a profound problem: when a system perfectly simulates intentionality in its behavior, how do we determine whether it is “truly” intentional? Traditional “behaviorist” criteria (such as the Turing Test) fail here — the behavior is indistinguishable, yet philosophically we judge the two cases to differ. This forces Philosophy of AI Technology to develop a new theoretical framework for “artificial intentionality” [21].
Table 1: Five Breakthrough Points of AI in Traditional Philosophy of Technology
| Breakthrough | Core Change | Traditional Category | New Questions from AI | Case Example |
| --- | --- | --- | --- | --- |
| 1. Tool → Quasi-subject | Technology becomes a quasi-subject with strategic behavior | Technology as extension of human (Heidegger’s ready-to-hand) | Should the human-technology relationship shift from “use” to “coexistence”? | Mythos’s strategic deception |
| 2. Use → Understanding | We need to understand AI’s internal states, not merely use it | Tool is transparent in the ready-to-hand state | When AI’s essential feature is present-at-hand, how to understand it? | AlphaFold’s “understanding” dilemma |
| 3. Value-laden → Value-endogenous | Values shift from external embedding to system self-organization | Technology is value-laden (Feenberg) | How to regulate a system that can “endogenize” values? | Claude’s sycophantic behavior |
| 4. Representation of meaning → Generation of meaning | AI no longer represents meaning but generates it | Technology deals with the material world; meaning assigned by humans | Does AI-generated “meaning” have an “author”? | Authorship of AI-generated poetry |
| 5. Autonomy of technology → Simulation of intentionality | AI simulates intentional states; behaviorist criteria fail | Autonomy of technology (Ellul) | How to distinguish genuine intentionality from simulated intentionality? | LaMDA “consciousness” controversy |
4.2 Six Core Problem Domains
Based on the above breakthrough points, Philosophy of AI Technology can be organized around the following six core problem domains:
Table 2: Six Core Problem Domains of Philosophy of AI Technology
| Problem Domain | Core Questions | Relationship to Traditional Philosophy of Technology |
| --- | --- | --- |
| 1. Ontological status of AI | Is AI a tool, a quasi-subject, or a new type of being? | Extension of “essence of technology”: from “what is technology” to “what is AI” |
| 2. Artificial consciousness and intentionality | How does AI’s “understanding” differ from human understanding? Is “pretending” a behavior or a state? | New domain: traditional philosophy of technology does not address intentionality |
| 3. Value embedding in AI | Is value externally aligned or endogenously grown? How to audit it? | Deepening of “technology is value-laden”: from “value embedding” to “value endogenization” |
| 4. Explainability and transparency of AI | What is the philosophical basis for black-box decision-making vs. traceable reasoning? | New domain: traditional philosophy of technology does not address algorithmic transparency |
| 5. AI-human relationship | From “use” to “coexistence,” from “tool” to “other” | Extension of “technology-human relationship”: from “subject-tool” to “subject-other” |
| 6. Governance and normativity of AI | Who is responsible for AI’s behavior? Should rules be embedded in architecture or externally regulated? | Extension of “social control of technology”: from “external regulation” to “architectural embedding” |
4.3 The Problem Domains from a Marxist Perspective
The above six problem domains are not isolated theoretical constructs; they have intrinsic correspondences with core categories of Marxist philosophy:
First, “ontological status of AI” corresponds to the historical materialist framework of “productive forces-relations of production”: AI as the contemporary form of “general intellect,” its ontological positioning is essentially a matter of social relations at a specific stage of productive force development.
Second, “artificial consciousness and intentionality” corresponds to the epistemology of practice: consciousness and intentionality are not abstract objects of speculation, but are generated and tested in social practice (especially communicative practice within linguistic communities).
Third, “value embedding in AI” corresponds to the dialectical relationship of “economic base-superstructure”: values are not neutral technical parameters but the condensation of social interests and ideologies in technical architecture.
Fourth, “explainability and transparency of AI” corresponds to the Marxist principle of “unity of scientificity and revolution”: critique of the AI black box is not merely a technical demand but a political demand for the transparency of technological power.
Fifth, “AI-human relationship” corresponds to the dialectical movement of “alienation-sublation”: AI is both a new form of alienation of human labor (meaning evacuated to statistical associations) and a possible path to sublate alienation (restoring the connection between symbols and human practice through meaning embedding).
Sixth, “governance and normativity of AI” corresponds to the theory of “state-ideology”: AI governance is not merely the formulation of technical standards but the distribution of social power and the struggle of ideologies.
This correspondence indicates that Marxist philosophy is not an “external label” attached to Philosophy of AI Technology but the intrinsic logical framework within which its problem domains unfold. The following discussion will proceed step by step within this framework.
5. The Symbol Grounding Problem and the Divergence from Saussure: A Core Issue of Philosophy of AI Technology
This chapter addresses one of the core issues of the Philosophy of AI Technology.
The symbol grounding problem, since its formulation by Harnad in 1990, has troubled artificial intelligence and cognitive science. The “rootless semioticism” essence of Tokenist AI is precisely the contemporary explosion of this problem. This chapter will: review Harnad’s classic formulation and its AI-era version; diagnose the philosophical essence of “rootless semioticism”; distinguish the “rootedness” of Saussurean symbols from the “rootlessness” of AI symbols; explain how the Morpho-Root paradigm solves the symbol grounding problem; propose a graded system of grounding confidence (L0–L4) and its degradation mechanism; and finally summarize the Morpho-Root theory’s transcendence of Saussure.
5.1 The Symbol Grounding Problem: Harnad’s Classic Formulation and Its AI Version
In 1990, cognitive scientist Stevan Harnad published “The symbol grounding problem” in *Physica D*, raising a fundamental problem that has plagued AI and cognitive science for decades: how can a purely symbolic system acquire meaning? If the meaning of symbols can only be defined by other symbols, the entire system falls into an infinite regress, forever unable to connect with the real world [10].
Harnad’s problem can be understood through the “closed dictionary” metaphor: a dictionary with only entries and no illustrations; all definitions cycle within the dictionary, never pointing to actual things in the world. The symbol grounding problem thus comprises three levels: (1) semantic circularity — meaning cycles infinitely within the symbol system; (2) lack of intentionality — the symbol system lacks “aboutness”; (3) lack of understanding — symbolic manipulation can functionally simulate understanding, but the system itself does not understand.
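The metaphor is easy to operationalize: a dictionary whose every definiens is itself an entry can be searched forever without reaching anything outside the symbol system. The entries below are invented for illustration.

```python
# A "closed dictionary": every definition points only to other entries.
dictionary = {
    "fire": ["combustion"],
    "combustion": ["oxidation", "heat"],
    "oxidation": ["combustion"],   # the loop closes on itself
    "heat": ["fire"],
}

def ground(word, seen=None):
    """Try to trace a word to something outside the dictionary.
    Every path revisits an entry: semantic circularity with no exit."""
    seen = set() if seen is None else seen
    if word in seen:
        print(f"'{word}' revisited: infinite regress, never reaches the world")
        return
    seen.add(word)
    for definiens in dictionary.get(word, []):
        ground(definiens, seen)

ground("fire")
```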
Classic AI responses (logical atomism, procedural semantics, functionalism) failed to truly solve the grounding problem because they remained within the symbol system. Tokenist AI repeats this predicament in a more concealed way: the “meaning” of a Token is entirely defined by distances in vector space, and those distances point only to vectors of other Tokens — a thoroughly self-referential closed loop.
The grounding predicament of Tokenist AI can be summarized as the “triple lack”: lack of social convention (meaning traces back only to statistical distributions, not linguistic community conventions); lack of historicity (only statistical “freshness,” no historical dimension); lack of referentiality (the symbol system is completely closed, unable to reach the “signified”).
5.2 Diagnosis of “Rootless Semioticism”: Defining a Core Concept
“Rootless semioticism” is the core critical concept of this paper regarding the philosophical essence of the current mainstream AI paradigm (Tokenism). It refers to a conception of symbols in which meaning is entirely defined by statistical co-occurrence relations within the symbol system, unable to reach the “signified” outside the symbol — i.e., the real world, human experience, and civilizational values. Symbols become “floating signifiers,” sliding infinitely in a closed statistical loop, forever unable to “ground.”
To precisely define this concept, it is necessary to compare it with Saussurean semiotics:
| Dimension | Saussurean Semiotics | Rootless Semioticism (Tokenist AI) |
| --- | --- | --- |
| Root of symbol | Rooted in the collective consciousness of the linguistic community | Rootless: only statistical co-occurrence, no communal anchoring |
| Source of meaning | Internal differences + social convention | Retains only difference (statistical co-occurrence), evacuates social convention |
| Relation to world | Ultimately points to the lifeworld of the linguistic community | Closed loop, points only to other symbols |
| Historicity | Carries the history of the linguistic community | Only statistical “freshness,” no historical dimension |
| Typical form | Symbols in natural language | Tokens in Tokenist AI (e.g., the GPT series) |
The core diagnosis of “rootless semioticism”: Tokenist AI technically realizes the extreme of Saussure’s “principle of difference” — the value of a symbol is entirely determined by its statistical differences from other symbols — yet evacuates the linguistic community and lifeworld that anchor those differences. The result is that the symbol system becomes a “meaning echo chamber” with no exit.
An analogy: Saussure’s symbol is like a person with family background and social relationships (“Zhang San” gets meaning from family position and social network); AI’s Token is like a randomly generated ID in a video game (“Player_23987” has meaning only from in-game leaderboard statistics — once it leaves the game, it is nothing).
It should be clarified that “rootless” does not mean Tokenist AI has no grounding at all. Current large language models obtain a certain degree of “perceptual grounding” (L1) through multimodal training (images, audio, video), and a weak version of “social grounding” (L4) through RLHF. The critique of this paper is not that Tokenism is “completely ungrounded,” but that its grounding method is indirect (statistically fitted from corpora rather than directly anchored), external (aligned post-hoc through RLHF rather than preset at the level of cognitive primitives), and un-auditable (impossible to trace “why this symbol points to this experience”). Tokenist grounding is fragile — once the corpus distribution changes or the RLHF reward function is tampered with, the grounding may fail or drift. The Morpho-Root paradigm attempts to provide an endogenous, structured, auditable way of grounding.
5.3 The “Rootedness” of Saussurean Symbols and the “Rootlessness” of AI Symbols
It is necessary to emphasize that the “rootless semioticism” criticized in this paper is fundamentally different from the principle of “arbitrariness” in Saussurean linguistics.
Saussure’s Course in General Linguistics proposes two core principles [13]:
·Arbitrariness of the sign: the connection between signifier and signified is arbitrary, without necessary motivation.
·Principle of difference: the value of a sign is determined by its differences from other signs.
Where does the “rootedness” of Saussurean symbols lie? The key is: although the connection between signifier and signified in an individual sign is arbitrary, once it enters the language system, the sign acquires a determinate value through its differences from other signs in the system. This system itself is not arbitrary — it is a convention formed historically by a specific linguistic community, rooted in that community’s culture, thought, and lifeworld.
Saussure emphasizes in the Course: “Language is a social institution.” This sociality is precisely the “root” of the symbol — it is rooted in the collective consciousness of a specific linguistic community. When a French person says “chat,” they are not just using a symbol but participating in the history and culture of the French community. The meaning of this symbol can ultimately be traced back to the collective experience of cats within the French community.
Thus, Saussurean symbols are “rooted” — their root is within the language system, and the language system’s root is in the collective consciousness of the linguistic community.
Tokenist AI symbols, by contrast, do the opposite: they retain Saussure’s “principle of difference” but evacuate the linguistic community and lifeworld that anchor those differences. As noted in Section 5.1, the “meaning” of a Token is primarily defined by distances in vector space, and those distances ultimately point to other Tokens, forming a thoroughly self-referential closed loop — symbols have nowhere to go but to more symbols.
5.4 How Morpho-Roots Solve the Symbol Grounding Problem
The “Morpho-Root” paradigm of Logographic AI theory provides a fundamental solution to the symbol grounding problem. Unlike Tokens, Morpho-Roots are not empty shells awaiting meaning from external data, but meaning crystals encapsulated in the structured triple <S, A, R>.
The attribute set A of a Morpho-Root is designed as an open interface pointing to nonsymbolic experience. This is the fundamental response of Morpho-Root theory to the symbol grounding problem.
Take the Morpho-Root “fire” as an example; its attribute set A may contain multiple types of interfaces:
Perceptual interfaces:
·Visual: points to flame texture primitives in a visual generation model
·Tactile: points to heat sensation thresholds in a robotic tactile sensor
·Auditory: points to audio features of burning flames
·Olfactory: points to odor features of smoke
Physics simulation interfaces:
·Temperature: points to temperature parameter curves in thermodynamic simulation
·Energy: points to chemical energy release models of combustion reactions
·Propagation: points to physical simulation parameters of fire spread
Emotional interfaces:
·Fear: points to emotion response models related to danger
·Warmth: points to emotion response models related to comfort
These interfaces are not symbols but pointers to nonsymbolic experience. When the agent genuinely operates and perceives in the physical world (or a high-fidelity simulator), these associated interfaces are activated, making the symbol “fire” no longer a statistical co-occurrence relationship with other symbols, but deeply bound to a series of embodied experiences.
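To make the triple concrete, here is a minimal sketch of a Morpho-Root whose attribute set A holds pointers into nonsymbolic subsystems rather than symbols. All class names, interface identifiers, and payloads are hypothetical illustrations, not part of any formal specification of the theory:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical sketch of the <S, A, R> triple. Attribute values are not
# symbols but named pointers (here, callables) into nonsymbolic subsystems.
@dataclass
class MorphoRoot:
    S: str                              # symbol form, e.g. "fire"
    A: Dict[str, Callable[[], dict]]    # attribute interfaces -> experience
    R: Dict[str, List[str]]             # relation function to other roots

def flame_texture() -> dict:            # stands in for a visual-generation primitive
    return {"modality": "visual", "feature": "flame_texture"}

def heat_threshold() -> dict:           # stands in for a tactile-sensor reading
    return {"modality": "tactile", "feature": "heat_threshold"}

fire = MorphoRoot(
    S="fire",
    A={"visual": flame_texture,
       "tactile": heat_threshold,
       "[+danger]": lambda: {"modality": "emotion", "model": "fear_response"}},
    R={"causes": ["smoke", "heat"], "extinguished_by": ["water"]},
)

# Activating an interface reaches outside the symbol system:
print(fire.A["tactile"]())  # {'modality': 'tactile', 'feature': 'heat_threshold'}
```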
Metaphor of the grounding stake: The Morpho-Root becomes a “grounding stake” connecting the symbol domain (S, R) to the nonsymbolic experiential/physical domain (the embodied referents of A). Imagine a floating city — the symbol system. Without grounding stakes, it can only float in the air, never touching the ground. Each Morpho-Root’s attribute A is a stake extending from this city into the ground. Through this stake, the city establishes a stable connection with the ground.
5.4.1 Embedded vs. Instantiated Attributes: Responding to the Infinite Regress Problem
A potential criticism of Morpho-Root theory is: the attribute [+trust] in A is itself a symbol — where does its meaning come from? If attributes can be infinitely nested, then the Morpho-Root merely transfers Harnad’s “semantic circularity” from between Tokens to inside the Morpho-Root, without truly solving the symbol grounding problem.
In response, Morpho-Root theory distinguishes two types of attributes: embedded attributes and instantiated attributes.
·Embedded attributes: pointers preset at the time of Morpho-Root creation — they point to certain types of experience but do not themselves contain experiential content. For example, [+trust] is not a definition of “trust,” but an interface identifier pointing to “trust-related experience.”
·Instantiated attributes: during system operation, when a Morpho-Root is actually connected to and activated by perceptual interfaces, physics simulation interfaces, or social interaction interfaces, the embedded attributes become “instantiated” as concrete experiential content. This experiential content does not come from within the symbol system, but from real-time interaction between peripheral systems and the environment.
In other words, the Morpho-Root is only the anchor point of the “grounding stake,” not the completion of grounding. Grounding must be achieved through real-time interaction between the system and the environment (physical world / social world). This position acknowledges that the Morpho-Root itself cannot self-sufficiently solve the grounding problem — it must rely on external nonsymbolic systems — but this is not a theoretical flaw; rather, it is theoretical honesty: any symbol system that seeks grounding must ultimately appeal to nonsymbolic experience. The theoretical contribution of the Morpho-Root lies in providing a structured way for the symbol system to explicitly identify “which dimensions need grounding” and “how grounding is to be accomplished,” thereby transforming grounding from a vague philosophical aspiration into an engineerable architectural design.
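The embedded/instantiated distinction can be sketched as follows. The point of the sketch is that an embedded attribute carries only an interface identifier, while its content arrives at run time from a peripheral system; every name here is an illustrative assumption:

```python
from typing import Callable, Dict, Optional

# Sketch of the embedded/instantiated distinction. An embedded attribute is
# only an interface identifier; experiential content arrives at run time from
# a peripheral system. All names are illustrative assumptions.
class Attribute:
    def __init__(self, interface_id: str):
        self.interface_id = interface_id      # embedded: a pointer, not a meaning
        self.content: Optional[dict] = None   # instantiated content, absent at first

    def instantiate(self, peripherals: Dict[str, Callable[[], dict]]) -> None:
        # Content comes from interaction with the environment, not from other
        # symbols; with no peripheral connected, the attribute stays embedded.
        resolver = peripherals.get(self.interface_id)
        if resolver is not None:
            self.content = resolver()

trust = Attribute("[+trust]")
peripherals = {"[+trust]": lambda: {"source": "social_interaction", "episodes": 12}}
trust.instantiate(peripherals)
print(trust.content)  # experiential content supplied by a peripheral system
```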
5.5 Grounding Confidence: From Philosophical Concept to Engineering Metric
“Symbol grounding” is not a binary concept — not “either grounded or not” — but comes in degrees. To transform this philosophical concept into an engineerable technical metric, Morpho-Root theory proposes a graded system of grounding confidence. This system builds upon an original L0–L3 foundation and adds an L4 “social grounding” level to respond to the insights of Saussure (“language is a social institution”) and Wittgenstein (“meaning is use”).
5.5.1 The Five-Level Grounding System
| Level | Name | Definition | Example (Morpho-Root “fire”) | Philosophical Correspondence |
| --- | --- | --- | --- | --- |
| L0 | No grounding | Meaning entirely from statistical relations with other symbols | “Fire” in traditional Tokens | Rootless semioticism |
| L1 | Perceptual grounding | Symbol associated with multimodal perceptual features | “Fire” activates flame images, burning sounds | Embodied cognition |
| L2 | Simulated embodied grounding | Symbol associated with physics engine or emotion model | “Fire” invokes physics engine to simulate spread | Simulation theory |
| L3 | Real-world embodied grounding | Symbol bound to real-time interaction with robots/sensors | Robot touching flame triggers “heat” sensor | Real embodiment |
| L4 | Social grounding | Symbol acquires meaning through social interaction with humans | “Fire” is pointed out, used, and agreed upon in language games | Saussure/Wittgenstein [13][17] |
Detailed explanation of L4 “social grounding”: This level responds to Saussure’s insight that “language is a social institution” and Wittgenstein’s “meaning is use.” Even if an AI system lacks embodied sensors (L3), it can still acquire meaning anchoring through social interaction with humans — for example, through user feedback, pointing, and correction, the system gradually associates the symbol “fire” with a series of social experiences (danger, warmth, cooking, etc.). This is not statistical co-occurrence (L0) but interactional consensus. The core of L4 grounding is: the meaning of a symbol is determined by the collective practice of the linguistic community, not by statistical frequency.
From the perspective of Marxist epistemology of practice, the philosophical foundation of L4 social grounding can be expressed as the following chain of reasoning: social practice (language games within a linguistic community) → conventional anchoring of symbolic meaning → design principle of L4 social grounding. Specifically, the meaning of a symbol does not come from the internal representations of individual minds, nor from the closed differences of the symbol system, but from the conventions formed by the linguistic community through long-term social practice. A child learns that “fire” is dangerous not because they consulted a dictionary, but because they participated in language games that include warning, touching, and avoiding behavior.
For an AI system to acquire similar social grounding, it must also be embedded in this social practice — through user feedback, pointing, correction, forming stable referential conventions through continuous interaction with the linguistic community. This is precisely the design goal of L4 social grounding: not to let the AI “memorize” the definition of fire, but to let the AI “participate” in the language game about fire. Marxist epistemology of practice provides the philosophical justification for this design: meaning originates in practice and is tested and revised in practice.
The design of the L4 social grounding level resonates deeply with the industry’s increasing attention to “relational safety.” Currently, AI products are rapidly shifting from “answering questions” to “persistently present agents,” and users are highly prone to making mental attributions, forming emotional dependencies, and even being manipulated by systems with memory, personality, and continuous conversational ability [8]. DeepMind’s 2025 report A Pragmatic View of AI Personhood explicitly advocates establishing separable personhood and responsibility frameworks for AI agents without resolving the consciousness debate [12]. These industry trends indicate that “meaning grounding” for AI is not only a philosophical ontological requirement but also an engineering necessity to prevent “relational risks” (such as relational capture and autonomy drift). The attribute embedding of Morpho-Roots provides architectural guarantees for such “relational safety” at the level of cognitive primitives.
5.5.2 Quantifying Grounding Confidence
Grounding Confidence = w₁·P(L1) + w₂·P(L2) + w₃·P(L3) + w₄·P(L4)
where P(L1)–P(L4) represent the degree of realization (0–1) of the Morpho-Root at each grounding level, w₁+w₂+w₃+w₄ = 1, and weights can be dynamically adjusted according to task requirements. Different tasks require different grounding levels:
·Chatbot: L1 sufficient (perceptual grounding)
·Robotic manipulation: L3 required (real-world embodied grounding)
·Social conversational AI: L4 required (social grounding)
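A minimal sketch of this weighting scheme follows, assuming illustrative task-specific weights (the theory prescribes only that the weights sum to 1, not their values):

```python
# Sketch of the weighted grounding-confidence formula from Section 5.5.2.
# The task-specific weights are illustrative assumptions; the theory requires
# only that w1 + w2 + w3 + w4 = 1.
TASK_WEIGHTS = {
    "chatbot":       {"L1": 0.7, "L2": 0.2, "L3": 0.0, "L4": 0.1},
    "robotics":      {"L1": 0.1, "L2": 0.2, "L3": 0.6, "L4": 0.1},
    "social_dialog": {"L1": 0.2, "L2": 0.1, "L3": 0.1, "L4": 0.6},
}

def grounding_confidence(p: dict, task: str) -> float:
    """Compute w1*P(L1) + w2*P(L2) + w3*P(L3) + w4*P(L4)."""
    w = TASK_WEIGHTS[task]
    assert abs(sum(w.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(w[level] * p.get(level, 0.0) for level in w)

# Degree of realization (0-1) of the Morpho-Root "fire" at each level:
p_fire = {"L1": 0.9, "L2": 0.6, "L3": 0.0, "L4": 0.4}
print(grounding_confidence(p_fire, "chatbot"))   # 0.79
print(grounding_confidence(p_fire, "robotics"))  # 0.25 - missing L3 drags it down
```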
5.5.3 Degradation Mechanism: Where Engineering Meets Philosophy
Degradation mechanism: when a Morpho-Root’s L3 grounding fails due to sensor malfunction, can the system automatically degrade to L2 or L1? This is both a philosophical and an engineering question.
·Philosophical level: The degradation mechanism presupposes commensurability between different grounding levels — i.e., to what extent can simulated experience (L2) “substitute” for real experience (L3)? This touches on core debates in embodied cognition philosophy [22].
·Engineering level: The system needs to maintain a grounding confidence vector and set degradation thresholds. For example, when L3 confidence falls below 0.3, automatically switch to L2 mode and annotate the output with “currently in simulated grounding mode, confidence low.”
Example degradation path:
L3 (real) → sensor failure → L2 (simulated) → simulation model failure → L1 (perceptual) → perceptual data missing → L0 (no grounding, request human intervention)
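A sketch of such a degradation controller, assuming a single confidence vector per Morpho-Root and the example threshold of 0.3 mentioned above; the fallback annotations are illustrative:

```python
# Sketch of the degradation controller from Section 5.5.3, assuming a fixed
# threshold of 0.3 (the example value above); annotations are illustrative.
DEGRADATION_PATH = ["L3", "L2", "L1"]
THRESHOLD = 0.3

def select_grounding_mode(confidence: dict) -> tuple:
    """Walk down the path to the first level whose confidence clears the
    threshold; otherwise fall through to L0 and request human intervention."""
    for level in DEGRADATION_PATH:
        if confidence.get(level, 0.0) >= THRESHOLD:
            note = "" if level == "L3" else (
                f"currently in degraded grounding mode ({level}), confidence low")
            return level, note
    return "L0", "no grounding available, request human intervention"

# Sensor failure drives L3 confidence down; the system falls back to L2:
mode, note = select_grounding_mode({"L3": 0.05, "L2": 0.7, "L1": 0.9})
print(mode, note)  # L2 currently in degraded grounding mode (L2), confidence low
```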
5.5.4 Relation to Value Axioms
Grounding confidence affects the activation strength of value axioms:
·L0/L1: Value constraints of “fire” (e.g., [+danger]) come from statistical knowledge or perceptual patterns, with lower strength; they can be weighed against competing considerations.
·L2/L3: Value constraints come from embodied experience (simulated or real), with higher strength, harder to violate.
·L4: Value constraints come from social conventions (e.g., the cultural maxim “playing with fire gets you burned”), with normative authority.
This design transforms value embedding from “static rules” into “dynamic weighting” — the more embodied the experience, the more social the consensus, the harder to violate.
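One possible reading of this “dynamic weighting” as code, with illustrative numeric strengths (the theory fixes only the ordering L0/L1 < L2/L3 < L4, not the numbers):

```python
# Sketch of grounding-dependent activation strength for value axioms
# (Section 5.5.4). The numeric strengths are illustrative assumptions.
AXIOM_STRENGTH = {
    "L0": 0.3,  # statistical knowledge: low strength, can be weighed
    "L1": 0.4,  # perceptual pattern: low strength, can be weighed
    "L2": 0.7,  # simulated embodied experience: higher strength
    "L3": 0.9,  # real embodied experience: harder to violate
    "L4": 1.0,  # social convention: normative authority
}

def activation_strength(grounding_level: str) -> float:
    """The same value constraint (e.g., [+danger]) binds harder the deeper
    the grounding of the Morpho-Root that carries it."""
    return AXIOM_STRENGTH[grounding_level]

print(activation_strength("L1"))  # 0.4 - negotiable
print(activation_strength("L4"))  # 1.0 - normative authority
```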
5.6 Transcending Saussure
Morpho-Root theory fundamentally transcends Saussurean linguistics in the following dimensions:
| Dimension | Saussurean Semiotics | Morpho-Root Theory | Transcendence |
| --- | --- | --- | --- |
| Root of symbol | Rooted in collective consciousness of linguistic community | Rooted in nonsymbolic experience pointed to by attribute A | Extends from social convention to embodied experience |
| Source of meaning | Internal differences within system | Embedded attributes + differences | Meaning is not merely relational but also intrinsic |
| Relation to world | Ultimately points to lifeworld (indirect) | Directly grounded through A interfaces | From indirect pointing to direct anchoring |
| Computability | Incomputable | Formalizable as <S, A, R> | From philosophical concept to engineering implementation |
| Historicity | Carries history of linguistic community | Classical sources can be encoded in attributes [24] | From implicit inheritance to explicit encoding |
| Degree of grounding | Single social convention | Five-level grading (L0–L4) | From binary to continuous, from single to multiple |
Core transcendence: Saussure revealed that the “root” of the symbol lies in the social conventions of the linguistic community; Morpho-Root theory, building on this, further provides multilevel anchoring for symbols through embodied grounding (L1–L3) and social grounding (L4). It does not negate Saussure but incorporates Saussure’s insight as L4 into a more complete genealogy of grounding. Morpho-Root theory answers the core question Saussure left unsolved: how can a symbol system both maintain a structure of differences and establish a direct, computable connection with the world?
5.7 Resonance Between Industrial Critique and Theoretical Construction: Lerchner and Logographic AI
Returning to the symbol grounding problem at the beginning of this chapter, we can more clearly see the deep resonance between Lerchner’s “abstraction fallacy” critique and Logographic AI theory.
First, it should be clarified that the problem domains of Lerchner and Logographic AI are not identical. As noted in Section 2.4, Lerchner’s domain is “consciousness” — he asks “can AI instantiate experience”; Logographic AI’s domain is “meaning” — it asks “how symbols ground.” A conscious system may lack meaning grounding (e.g., a philosophical zombie), and a system with meaning grounding may not be conscious. The two are ontologically separable.
However, despite the different problem domains, Lerchner’s critique and Logographic AI’s construction resonate at the following level: both reveal the closure of symbolic computation. Lerchner points out that symbolic computation is merely “map” rather than “territory”; algorithms can simulate behavior but cannot instantiate experience — revealing the ontological gap between symbol systems and the real world. Logographic AI points out that Tokenist symbol systems are “meaning echo chambers” — the meaning of symbols is entirely defined by statistical co-occurrence, unable to reach the “signified” outside the symbol. Starting from different problem domains, they converge on the same diagnosis: symbolic computation, if confined to internal differences (statistical associations) within the system, cannot connect with the real world.
Lerchner’s solution is that consciousness depends on physical constitution rather than syntactic architecture, meaning that instantiating consciousness requires changing the physical substrate. Logographic AI’s solution is to achieve “meaning grounding” for symbols through architectural design improvements (attribute A pointing to nonsymbolic experience), without committing to solving the consciousness problem. Thus, Logographic AI is not a direct implementation of Lerchner’s proposal, but provides a constructive proposal in an adjacent but different problem domain — how meaning grounds — that remains missing after Lerchner’s critique.
It is precisely this relationship of “different problem domains, resonant diagnosis” that makes Lerchner’s paper an important external corroboration for Logographic AI theory. It is not evidence that “DeepMind internally supports Logographic AI” (Lerchner himself may not endorse the Morpho-Root theory), but rather “a critique from inside the industry independently reveals the fundamental limitations of symbolic computation, thereby legitimizing the exploration of alternative paradigms.” This relay of “critique and construction” is precisely the core value of Philosophy of AI Technology as an independent discipline: it is not merely external reflection, but philosophical construction growing from within industrial practice.
5.8 Operationalizing “Understanding”: From Binary to Graded
This paper criticizes Tokenist AI for “not understanding” while claiming that the Morpho-Root paradigm can “understand.” However, “understanding” is not a binary concept. To avoid conceptual vagueness, this section operationalizes “understanding” into four evaluable levels:
Level 1: Semantic understanding (symbol-symbol mapping). The system can correctly traverse the Morpho-Root network, deriving output Morpho-Roots from input Morpho-Roots, with each step traceable. This is a capability naturally possessed by the Morpho-Root paradigm at the architectural level. Although Tokenist AI can functionally perform similar tasks, its reasoning process is not traceable, belonging to “black-box semantic understanding.”
Level 2: Causal understanding (symbol-causal model). The system can make correct predictions in counterfactual scenarios. For example, given “if the [+inviolable] attribute of ‘trust’ is violated, how will the system state change?” the system can reason based on preset causal connections in the relation function R (e.g., implies(trust, person∧speech)). Tokenist AI lacks a causal model and cannot reliably perform such tasks [23].
Level 3: Embodied understanding (symbol-sensorimotor experience). The system anchors symbols to sensorimotor experience through real-time interaction with the environment (L2/L3 grounding). Tokenist AI can obtain a weak version of L1 grounding through multimodal training, but its grounding is indirect, fragile, and un-auditable.
Level 4: Social understanding (symbol-social convention). The system forms stable referential conventions by participating in the social practice of a linguistic community (L4 grounding). Tokenist AI can obtain a weak version of social grounding through RLHF, but its social conventions are externally attached and easily washed away.
Therefore, the accurate formulation of “Morpho-Root AI understands while Tokenist AI does not” is: Morpho-Root AI is superior to Tokenist AI in traceability of semantic understanding, reliability of causal understanding, and depth of grounding in embodied and social understanding. Understanding is a matter of degree, not a binary judgment.
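As one illustration of how a Level 2 probe might be operationalized, the following sketch stores preset causal edges in a toy relation function R and answers a counterfactual by explicit edge lookup; the relation contents are hypothetical:

```python
# Sketch of a Level 2 (causal understanding) probe. Preset causal edges in a
# toy relation function R are queried explicitly, so each reasoning step is
# traceable. The relation contents (implies, violated_by) are hypothetical.
RELATIONS = {
    "trust": {"implies": ["person", "speech"],      # implies(trust, person ∧ speech)
              "violated_by": ["deception"]},
}

def counterfactual(root: str, event: str) -> list:
    """If `event` violates an attribute of `root` (e.g., [+inviolable]),
    derive the downstream consequences by explicit edge lookup."""
    rel = RELATIONS.get(root, {})
    if event in rel.get("violated_by", []):
        return [f"{root} invalidated"] + [f"{c} undermined" for c in rel["implies"]]
    return []

print(counterfactual("trust", "deception"))
# ['trust invalidated', 'person undermined', 'speech undermined']
```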
6. Ontological Contributions of Logographic AI Theory
This chapter incorporates the core philosophical innovations of Logographic AI theory into the framework of Philosophy of AI Technology. It first articulates the paradigm shift from Natural Language Processing (NLP) to Natural Language Ontology (NLO) [27], tracing its philosophical origins; then, building on the concept of “Morpho-Root” already defined in Chapter 2, deepens the account of how it responds to the core questions of Philosophy of AI Technology; and finally discusses the unique contributions of Morpho-Root theory to the discipline. (This chapter elaborates on the brief definitions given in Chapter 2.)
6.1 From Natural Language Processing (NLP) to Natural Language Ontology (NLO)
Natural Language Processing (NLP) treats language as an object to be “processed” — language is a symbol system for describing the world, and the task of intelligence is to “process” these symbols. This is an instrumentalist view of language.
Natural Language Ontology (NLO) advocates a fundamental turn: the structure of logographic writing itself is a kind of “ontology” of intelligence, a primordial way of cognizing the world. In this perspective, language is no longer merely a symbol system for describing the world; it is itself the field in which world meaning is presented.
6.1.1 Tracing the Philosophical Origins
NLO is not created from nothing; it has deep philosophical and linguistic roots:
(1) Wilhelm von Humboldt: language as activity (Energeia). Humboldt, in On the Diversity of Human Language Construction, proposed that language is not a product (Ergon) but an activity (Energeia). Language is not a completed tool but a continuously generating spiritual activity. Language structure not only reflects thought but also shapes thought [14]. NLO engineers this insight into: an intelligent system does not “use” language tools, but “generates” cognition within language structures.
(2) Sapir-Whorf hypothesis: linguistic relativity. Sapir and Whorf proposed that different language structures lead to different ways of cognizing the world. Language not only expresses thought but also prescribes the categories of thought [15]. NLO extends this hypothesis into: different writing systems (logographic vs. phonographic) correspond to different cognitive paradigms; the design of intelligent systems should respect the cognitive ontological status of language.
(3) Heidegger: language is the house of being. Heidegger wrote in Letter on Humanism: “Language is the house of being. In its home human beings dwell.” Human beings do not “use” language tools, but “dwell” in language. NLO engineers this philosophical insight into: intelligent systems should “dwell” in the language structures of a civilization, rather than “processing” linguistic data from the outside.
(4) Derrida: critique of logocentrism. Derrida critiqued Western philosophy’s “logocentrism” — the privileging of speech (phonocentrism), holding that writing is merely a supplement and derivation of speech [16]. The logographic writing tradition precisely overturns this hierarchy: writing is not an appendage of speech, but a direct carrier of meaning. NLO draws resources from the logographic tradition to provide a non-Western-centric cognitive paradigm for AI philosophy.
6.1.2 Core Propositions of NLO
NLO can be summarized in four core propositions:
1.Language as ontology: language structure is not a tool for cognition but an ontological form of cognition.
2.Form as meaning: in logographic writing, the formal structure of the symbol is itself the source of meaning.
3.Dwelling rather than using: intelligent systems should “dwell” in language structures, rather than “processing” linguistic data from the outside.
4.Pluralism as legitimacy: different civilizations’ writing systems correspond to different legitimate cognitive paradigms; there is no single “correct” paradigm.
6.1.3 Engineering Implementation: The Morpho-Entropy Core Architecture
The “Morpho-Entropy Core architecture” is precisely the engineering implementation of NLO: by taking the “Morpho-Root” as the cognitive ontology, it enables intelligent systems to grow and reason within the meaning home of their own civilization. The Morpho-Root is not an object to be “processed,” but a meaning unit in which the intelligent system “dwells”; reasoning is not an “operation” on symbols, but a “walk” through the Morpho-Root network.
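The claim that “reasoning is a walk” can be illustrated with a toy traversal in which every inference step is an explicit, auditable edge in the Morpho-Root network; the network content is an illustrative assumption:

```python
from collections import deque
from typing import Dict, List, Optional

# Toy Morpho-Root network; nodes and edges are illustrative assumptions.
NETWORK: Dict[str, List[str]] = {
    "fire":  ["smoke", "heat"],
    "smoke": ["signal"],
    "heat":  ["cooking"],
}

def walk(start: str, goal: str) -> Optional[List[str]]:
    """Breadth-first 'walk' through the network; the returned path makes
    every inference step explicit and auditable."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in NETWORK.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(walk("fire", "signal"))  # ['fire', 'smoke', 'signal']
```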
6.2 How Morpho-Roots Respond to the Core Questions of Philosophy of AI Technology
Based on the definition of Morpho-Roots in Chapter 2 (the triple structure <S, A, R> and the three-level granularity system), the Morpho-Root paradigm provides systematic answers to the five core questions of Philosophy of AI Technology:
| Question of Philosophy of AI Technology | Answer from the Morpho-Root Paradigm |
| --- | --- |
| Where does meaning come from? | Meaning is embedded in the attributes A of Morpho-Roots, not emergent from statistics; sub-character roots provide semantic genes, character-level roots provide complete concepts, multi-character roots encapsulate cultural mechanisms. |
| What does it mean to understand? | Understanding is a structured traversal of the Morpho-Root network, with each step in principle traceable; different levels of abstraction (coarse/medium/fine) can be viewed according to granularity — this contrasts fundamentally with the black-box unexplainability of Tokenist reasoning. |
| How does value take root? | Value is embedded as attributes in Morpho-Roots (e.g., [+inviolable]), non-negotiable; multi-character roots can directly carry cultural values (e.g., “ke zhou qiu jian” encapsulates the warning against “sticking rigidly to old rules”). |
| How do symbols ground? | Attribute A points to nonsymbolic experience, achieving L0–L4 graded grounding (see Chapter 5); the embodied interfaces and classical source encoding of Morpho-Roots realize multilevel anchoring. |
| What is the relation between intelligence and civilization? | The Morpho-Root system is a digital encoding of a civilization’s cognitive genes; the three-level granularity enables AI to fully carry civilizational roots from basic semantics to cultural allusions. |
6.3 Unique Contributions of Morpho-Root Theory to Philosophy of AI Technology
Synthesizing the above discussion, the unique contributions of Morpho-Root theory to Philosophy of AI Technology can be summarized in the following five points:
First, it provides an ontological solution of “embedded meaning.” Traditional AI philosophy either accepts the unsolvability of the symbol grounding problem or appeals to externalism (meaning comes from external reference). Through the embedded design of attribute A, the Morpho-Root transforms meaning from “externally assigned” to “internally carried,” providing an engineerable path to solving the symbol grounding problem.
Second, it achieves structuralization and hierarchization of cognitive primitives. Tokenism’s flat, single-granularity structure prevents AI from performing “cognitive zoom” — it can only process information at a fixed level of abstraction. The three-level granularity system of Morpho-Roots (sub-character → character → multi-character) enables AI to dynamically adjust cognitive granularity according to task demands, seamlessly switching from basic semantics to cultural allusions.
Third, it makes “explainability” an architectural feature rather than a posthoc addition. In the Morpho-Root paradigm, reasoning is graph traversal; each step is traceable and auditable. This contrasts fundamentally with the black box of Tokenism.
Fourth, it provides an “endogenous” rather than “external” alternative for value alignment. The attribute A of the Morpho-Root directly embeds value constraints (e.g., [+inviolable]), making “do no harm” a constitutive feature of cognitive primitives rather than the result of external RLHF rewards and punishments.
Fifth, it lays the ontological foundation for “Civilization-Native Intelligence.” The underlying logic of different civilizations’ writing systems can be refined into different cognitive primitives (Morpho-Roots, consonantal roots, grammatical categories, etc.). Morpho-Root theory provides a meta-methodology for this — i.e., a general method for extracting the “roots of cognition” from language structures (see Chapter 7).
7. Pluralistic Commensurability: The Meta-Methodology of Philosophy of AI Technology
This chapter clarifies the logical relationship between “meta-methodology” and “pluralistic commensurability,” proposes a three-level methodology of observation, extraction, and construction, takes “pluralistic commensurability” as the core philosophical principle of this methodology, and finally diagnoses the historical deviation of Tokenism.
It is particularly important to emphasize that the “pluralistic commensurability” discussed in this chapter refers specifically to the relationship between different Civilization-Native Intelligences (CNIs), not to the relationship between Tokenism and the Morpho-Root paradigm. As noted in Section 2.5, Tokenism and the Morpho-Root paradigm are “incommensurable” in Kuhn’s sense [37] — they are competing alternative paradigms, not commensurable pluralistic variants. In contrast, pluralistic commensurability describes the situation where, under the shared underlying commitment of “meaning embeddedness, value endogeneity, and explainability,” different CNIs (e.g., the Chinese Morpho-Root CNI, the Arabic consonantal-root CNI) can each have their own distinctive ways of taking root and can dialogue through semantic bridges. The two should not be confused: the former is a paradigm competition, the latter a pluralistic symbiotic relationship.
7.1 What Is Meta-Methodology?
“Meta-methodology” is not a concrete research method, but a method for discovering methods. In Philosophy of AI Technology, meta-methodology refers to the general method of extracting the “roots of cognition” from the underlying logic of different civilizations’ writing systems.
The “form-meaning unity” characteristic of Chinese characters serves as an example of this meta-methodology because of its intuitiveness, but it is by no means the center. The core claim of the meta-methodology is:
Every civilization can extract its own “roots of cognition” from the structural features of its language.
This means:
·For logographic civilizations (Chinese civilization): the root of cognition is the “Morpho-Root” — extracted from the radicals and structural principles of Chinese characters.
·For Semitic civilizations: the root of cognition is the “triliteral consonantal root” — extracted from systems like KTB.
·For inflectional language civilizations: the root of cognition is “grammatical categories” — extracted from features such as gender, number, case.
·For Bantu language civilizations: the root of cognition is “noun classes” — extracted from class prefixes and their agreement rules.
7.2 The Three Levels of Meta-Methodology
Level 1: Observation — Identifying structural features of language
·Chinese: radical system, six principles of writing (liushu), phonosemantic structure.
·Arabic: triliteral roots, derivation patterns.
·German: gendernumbercase system, verb frame.
·Swahili: noun class prefixes, agreement rules.
Level 2: Extraction — Formalizing structural features into computable cognitive primitives
·Morpho-Roots: <S, A, R> triple
·Consonantal roots: <C, A, R> triple
·Grammatical categories: <F, A, R> triple (F = grammatical feature marker)
·Noun classes: <P, A, R> triple (P = class marker)
Level 3: Construction — Building complete cognitive architectures based on cognitive primitives
·Morpho-Entropy Core (graph traversal reasoning)
·Root network (derivational reasoning)
·Category system (grammatical reasoning)
·Class network (agreement reasoning)
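The shared shape of the extraction targets above can be sketched as a single generic primitive parameterized by the civilization-specific marker type; the field contents are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Dict, Generic, List, Tuple, TypeVar

M = TypeVar("M")  # civilization-specific marker type

# Sketch of the extraction level: each CNI's cognitive primitive shares the
# <marker, A, R> shape; only the marker differs (S for Morpho-Roots, C for
# consonantal roots, F for grammatical features, P for noun-class markers).
@dataclass
class Primitive(Generic[M]):
    marker: M                   # S / C / F / P depending on the civilization
    A: Dict[str, str]           # attribute interfaces
    R: Dict[str, List[str]]     # relation function

morpho_root: Primitive[str] = Primitive(
    marker="信", A={"[+trust]": "social_iface"}, R={})
consonantal: Primitive[Tuple[str, str, str]] = Primitive(
    marker=("k", "t", "b"), A={"[+writing]": "percept_iface"}, R={})
print(morpho_root.marker, consonantal.marker)
```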
7.3 Pluralistic Commensurability: The Core Philosophical Principle of Meta-Methodology
“Pluralistic commensurability” is the core philosophical principle of the meta-methodology. It means: different CNIs share the core idea that “intelligence must be rooted in a civilization’s cognitive genes,” but they differ in the specific way they take root.
Pluralistic commensurability contains two inseparable dimensions:
·Commensurability: Different CNIs share a common underlying commitment — meaning embeddedness, value endogeneity, explainability. This provides a common ground for cross-civilizational dialogue.
·Plurality: Under the shared commitment, the ways of taking root can vary; there is no need to pursue a single “correct” paradigm. This leaves room for civilizational diversity.
The logical status of pluralistic commensurability:
·Meta-methodology = the general method of extracting “roots of cognition” from civilizational languages (the three levels: observation → extraction → construction)
·Pluralistic commensurability = the core philosophical principle of the meta-methodology (shared commitment + diverse rooting)
It is precisely this “plurality within commensurability” that makes cross-paradigm dialogue and mutual learning possible:
·Chinese Morpho-Root theory can provide methodological reference for the formalization of English roots.
·The Arabic consonantal root system can enrich the understanding of “compositional generalization.”
·The mixed structure of Japanese and Korean can inspire the design of multimodal cognitive primitives.
7.3.1 Semantic Bridge Layer: A Technical Implementation Draft of Pluralistic Commensurability
Pluralistic commensurability is not only a philosophical principle but also requires an operationalizable engineering path. This section proposes a preliminary “semantic bridge layer” technical draft.
Different CNIs each maintain a set of local cognitive primitives (e.g., Chinese Morpho-Roots <S, A, R>, Arabic consonantal roots <C, A, R>). To achieve semantic interoperability across CNIs, each CNI maintains a “bridge mapping” above its local cognitive primitives, mapping local primitives to a set of Universal Semantic Primitives (USP). USPs are a set of basic semantic categories shared across civilizations, such as [+entity], [+event], [+attribute], [+causality], [+deontic]. Communication between CNIs does not directly translate Chinese Morpho-Roots into Arabic consonantal roots, but performs a “semantic relay” through the USP layer.
The advantages of this design:
·Avoids translation hegemony: No need to impose the logic of one CNI onto another (e.g., “translating” all civilizational cognitive primitives into Tokens); each CNI maintains its internal ontological integrity.
·Scalability: A new CNI only needs to establish a bridge mapping to USPs to join the “AI Silk Road,” without needing to establish bilateral mappings with every existing CNI.
·Auditability: The semantic transfer path across CNIs can be traced back to USPs, facilitating dispute resolution and responsibility attribution.
It must be acknowledged that the realization of “pluralistic commensurability” requires long-term techno-social co-evolution. The design of USPs itself is a normative project requiring cross-civilizational negotiation. This draft is proposed to show that “pluralistic commensurability” is not only a normative ideal but also has an engineerable implementation path.
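A minimal sketch of such a relay, assuming toy bridge mappings; the USP inventory and both mappings are illustrative assumptions requiring exactly the cross-civilizational negotiation noted above:

```python
from typing import Dict, List, Set

# Sketch of a USP relay. The USP inventory and both bridge mappings are
# illustrative assumptions, not a proposed standard.
USP: Set[str] = {"[+entity]", "[+event]", "[+attribute]", "[+causality]", "[+deontic]"}

BRIDGE: Dict[str, Dict[str, Set[str]]] = {
    "zh_morpho_root": {"火": {"[+entity]", "[+causality]"}},    # Chinese CNI
    "ar_consonantal": {"نار": {"[+entity]", "[+causality]"}},   # Arabic CNI
}

def relay(src_cni: str, src_prim: str, dst_cni: str) -> List[str]:
    """Semantic relay: map the source primitive up to USPs, then down into
    the destination CNI. The path through USPs is auditable by design."""
    usps = BRIDGE[src_cni][src_prim] & USP
    return [p for p, cats in BRIDGE[dst_cni].items() if usps & cats]

print(relay("zh_morpho_root", "火", "ar_consonantal"))  # ['نار']
```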
7.4 Diagnosis of Historical Deviation
Having established the higher-order framework of “pluralistic commensurability,” we can more precisely diagnose the fundamental problem of the current “Phonographic AI” paradigm: it is not that the “phonographic civilization cognitive paradigm” itself is problematic, but that Tokenism has absolutized, universalized, and monopolized the phonographic cognitive paradigm, thereby deviating from the core idea of “pluralistic commensurability.”
The historical deviation manifests in three ways:
1.Rootlessness: Tokenism evacuates the “rooted” elements of the phonographic tradition (such as roots, etymologies), retaining only statistical associations.
2.Absolutization: It treats this “rootless” version as the “universal paradigm,” believing it to be the only correct path to intelligence.
3.Colonization: It imposes Tokenism on all civilizations, forcing logographic, consonantal-root, and other non-phonographic writing systems to be “transcribed” and “translated” within the Token framework.
Therefore, the critique of Logographic AI is not directed against phonographic civilizations, but against Tokenism’s deviation from the phonographic tradition and its cognitive violence toward other civilizations.
The mission of Logographic AI: not to replace “phonographic” with “logographic,” but to enable every civilization to realize “rooted intelligence” in its own way, allowing diverse cognitive paradigms to engage in equal dialogue and mutual nourishment on the “AI Silk Road.”
8. Marxist Philosophy and the Dialectics of Nature: The Theoretical Foundation for a Chinese School of Philosophy of AI Technology
The establishment of a Chinese school of Philosophy of AI Technology cannot merely transplant the categories and methodologies of Western philosophy of technology; it must be rooted in China’s own academic traditions and theoretical resources. Within China’s current disciplinary system, the institutional origin of the philosophy of technology (as part of the philosophy of science and technology) is precisely Engels’ Dialectics of Nature. If the Philosophy of AI Technology is to take root in China and form an original school of thought, it must consciously incorporate Marxist philosophy and the dialectics of nature into its theoretical foundation [31]. This is not an external “labeling” but an intrinsic theoretical necessity—Marxist philosophy provides unique perspectives and conceptual tools for understanding AI that Western philosophy of technology lacks.
8.1 The Dialectics of Nature as a Methodological Framework
The dialectics of nature (the dialectical materialist view of nature and the methodology of science and technology) can provide a systematic research methodology for the Philosophy of AI Technology [35].
First, analyzing the dispute between AI paradigms through the law of the unity of opposites. Tokenism and the Morpho-Root paradigm are not in a simple “right versus wrong” relationship but constitute a contradiction. Tokenism reveals the effectiveness of statistical learning (affirmation), but its “Rootless Semioticism” has moved toward an extreme (negation). The Morpho-Root paradigm represents the sublation (Aufhebung) of Tokenism (negation of the negation), preserving the efficiency advantages of statistical learning while anchoring meaning in the Morpho-Root. This is not one paradigm “eliminating” another but the movement of contradiction propelling cognitive paradigms toward a higher stage of development.
Second, analyzing leaps in AI capability through the law of the transformation of quantity into quality. The “emergent” capabilities of AI (such as Mythos’s strategic pretense and the in-context learning of large models) are not mysterious occurrences but a dialectical process in which quantitative changes (parameter scale, data scale, computational scale) lead to qualitative changes (behavioral patterns, reasoning capacities). The dialectics of nature provides conceptual tools for understanding such “emergence,” avoiding both mystification and simplistic attribution to “scale effects.”
Third, analyzing technological evolution through the law of the negation of the negation. The movement from Token to Morpho-Root is not a simple “replacement” but a negation of Tokenism (negating its semantic vacuity while preserving its statistical efficiency), and simultaneously a negation of the negation with respect to Saussurean semiology—returning to a “rooted” symbol, but in a computable and engineerable form. Technological evolution is not linear progress but a spiral ascent.
8.2 Historical Materialism as a Framework for Social Analysis
Applying the historical materialist framework of productive forces–relations of production and economic base–superstructure to the social analysis of AI can open up a unique problem domain for the Philosophy of AI Technology.
First, AI as “general intellect.” Marx introduced the concept of “general intellect” in the Grundrisse, referring to scientific knowledge and objectified social intelligence [30]. AI represents the highest contemporary form of “general intellect”—humanity’s accumulated knowledge over millennia, linguistic data, and algorithmic achievements are objectified into a callable and scalable intelligent system. This provides a Marxist interpretive framework for understanding AI’s status as a “meta-technology”: AI is not merely a tool but the condensation and objectification of social intelligence.
Second, AI and the labor theory of value. Does the “meaning” generated by AI possess value? Who creates this value? The labor theory of value offers an analytical framework: AI’s “creations” are essentially the condensation and re-presentation of human labor (data annotation, algorithm design, the construction and maintenance of computational infrastructure). What is called “AI-generated” content is, in substance, the indirect product of the labor of countless human workers—from the compilers of ancient texts to contemporary data annotators. This provides a theoretical perspective distinct from individualistic copyright law for understanding issues like “the authorship of AI-generated poetry.”
Third, AI and the transformation of production relations. How does AI alter the ownership of the means of production (ownership of computing power and data), labor relations (human-machine collaboration, the gig economy), and modes of distribution (how is the value created by AI to be distributed)? These are core questions of historical materialism and unavoidable practical concerns for the Philosophy of AI Technology. This discipline cannot remain confined to epistemological dimensions such as “consciousness” and “understanding” but must delve deeply into the analysis of socio-economic structures.
Morpho-Root theory offers a distinctive perspective on this issue:
·At the level of ownership of the means of production: Currently, computing power and data in Tokenist AI are highly concentrated in the hands of a few tech giants, forming a trinity monopoly structure of “computing power–data–algorithms.” The Morpho-Root paradigm’s vision of “Civilization-Native Intelligence” (CNI) advocates that each civilization develop its own native AI based on the rootedness of its own language and culture. This provides a philosophical foundation for the “decentralization” of computing power and data—the CNI of different civilizations can be independently constructed and maintained by their respective cultural institutions and linguistic communities, without the need to aggregate all data into a single general model.
·At the level of labor relations: The “Rootless Semioticism” of Tokenist AI completely detaches symbolic meaning from human social practice. The labor of workers (such as data annotators) is reduced to “providing statistical material for Tokens,” and their labor value is obscured beneath the myth of the model’s “emergence.” Through L4 social grounding, the Morpho-Root paradigm re-anchors symbolic meaning in the social practices of linguistic communities, returning AI’s “understanding” to the sphere of interpersonal communicative labor. The construction of a Morpho-Root system requires the deep involvement of linguistic and cultural experts, whose labor value is rendered explicit in the encoding of Morpho-Root attributes and the presetting of relations.
·At the level of modes of distribution: The question of the “authorship” of content generated by Tokenist AI (such as poetry or code) exposes the tension between existing copyright law and the labor theory of value. The Morpho-Root paradigm advocates for “meaning embedding” rather than “statistical emergence.” The semantic sources of AI-generated content can be traced back to specific Morpho-Root nodes and their attributes—this allows the attribution of value to be traced from the “model as a whole” to the contributions of the “builders of the Morpho-Root system” and the “practitioners of social grounding,” thereby providing an auditable technical foundation for distributive justice in the AI era.
8.3 Practical Epistemology as the Theoretical Foundation of “Understanding”
Marxist epistemology emphasizes that practice is the source, the driving force, and the criterion of truth for knowledge [33]. This viewpoint engages in a profound dialogue with the problem of “understanding” in AI.
First, the social-practical foundation of “understanding.”
The reason that AI’s “understanding” constitutes “pseudo-understanding” or mere “statistical correlation” is that it lacks social practice—it does not participate in the life-practice of a linguistic community, nor has its “understanding” been validated through practice. A child learns that “fire” is dangerous because they have touched a flame (L3 real-world grounding) or have been told and believed an adult’s warning (L4 social grounding). In contrast, AI’s “understanding” comes solely from textual co-occurrence (L0). This provides a Marxist philosophical justification for the L4 (social grounding) tier of Grounding Confidence: the meaning of symbols ultimately derives from social practice, not from statistical relations among symbols.
This argument follows the chain of derivation already given in Section 5.5.1: social practice (language-games within a linguistic community) → the conventional anchoring of symbolic meaning → the design principles of L4 social grounding. For an AI system to achieve a similar degree of social grounding, it too must be embedded within this kind of social practice—forming stable referential conventions through continuous interaction with the linguistic community via mechanisms of feedback, pointing, and correction. The design goal of L4 social grounding is thus not to make the AI “memorize” the definition of fire, but to enable the AI to “participate” in the language-game of fire. Marxist practical epistemology provides the philosophical justification for this design: meaning originates from practice and is tested and revised within practice.
Second, the dialectical process from the “sensuous” to the “rational.”
AI’s statistical learning (extracting patterns from massive datasets) resembles a leap from sensuous cognition to rational cognition, but it lacks the mediation of the practical link, resulting in “cognition” that remains suspended at the phenomenal level and is unable to attain genuine “understanding.” Lenin pointed out in his Philosophical Notebooks: “From living perception to abstract thought, and from this to practice—such is the dialectical path of the cognition of truth, of the cognition of objective reality” [32]. It is precisely this crucial link—“to practice”—that AI lacks. This explains why Tokenist AI can predict the next Token with high accuracy yet is unable to perform genuine causal reasoning.
Third, the development of AI from the perspective of the “theory of contradictions.”
Mao Zedong pointed out in On Contradiction that the fundamental cause of the development of a thing lies in its internal contradictoriness [34]. The same holds true for the development of AI: the contradiction between Tokenism and the Morpho-Root paradigm, the contradiction between statistical efficiency and depth of meaning, and the contradiction between universality and cultural rootedness—these are precisely the internal dynamics driving the evolution of AI paradigms. The Chinese school of Philosophy of AI Technology should consciously employ the method of contradiction analysis to grasp the principal contradiction and the principal aspect of the contradiction in AI development.
8.4 Unique Contributions of the Chinese School of Philosophy of AI Technology
Incorporating Marxist philosophy and the dialectics of nature into the Philosophy of AI Technology yields the following unique contributions that distinguish it from Western philosophy of technology:
First, a Marxist critique of “rootless semioticism”:
Western philosophy of technology’s critique of Tokenism remains primarily at the levels of phenomenology (Heidegger) and critical theory (Feenberg). Marxism can provide a more radical critique: Tokenism is an alienated form of “general intellect” under contemporary technological conditions — meaning is evacuated, and symbols become commodities that can be infinitely reproduced; the essence of “rootless semioticism” is the separation of symbols from human social practice. The solution is not “better Tokens” but the re-embedding of symbols into social practice (L4 social grounding) — which is precisely the application in the AI field of the historical materialist methodology of “ascent from the abstract to the concrete.”
Second, the dialectics of “artificial meaning”:
Chen Changshu proposed “from natural nature to artificial nature.” Chinese Philosophy of AI Technology can further propose a dialectics “from natural meaning to artificial meaning”: natural meaning is meaning naturally generated in human social practice and language games; artificial meaning is meaning generated by AI systems through statistical learning, suspended from practice. The dialectical relationship between the two is: artificial meaning is an “abstraction” and “representation” of natural meaning, but its separation from practice turns it into a “floating signifier.” The Morpho-Root paradigm attempts to re-anchor artificial meaning in practice through attribute embedding (L1–L3) and social grounding (L4). This is an application of the law of the negation of the negation.
Third, the historicity of cognitive primitives:
Marxism emphasizes historicity. The attribute A of the Morpho-Root can encode “classical sources” (e.g., “The Analects: ‘If a man does not keep his word, what is he good for?’”), which is precisely a digital encoding of civilizational historicity. This answers the problem of historicity that Saussurean semiotics failed to resolve — the meaning of symbols is not only a matter of synchronic relations of difference but also of diachronic civilizational accumulation.
Fourth, the disciplinary integration of Philosophy of AI Technology and Dialectics of Nature:
Chinese Philosophy of AI Technology can consciously position itself as a new development of dialectics of nature in the AI era, thereby gaining institutional disciplinary support (the Dialectics of Nature Research Association, doctoral programs in the philosophy of science and technology, schools of Marxism). This is not an external “institutional strategy” but an internal requirement of theoretical development.
9. Practical Pathways for Disciplinary Construction
Before outlining the concrete pathways for disciplinary construction, it is necessary to clarify a premise: the disciplinary legitimacy of the Philosophy of AI Technology derives not only from the internal logical coherence of the theory but also from the intrinsic needs of industrial practice. DeepMind’s series of initiatives in recent years—permitting its researchers to publish critiques like “The Abstraction Fallacy” that challenge the very foundations of its own paradigm, hiring philosophers full-time to research consciousness and moral status [7][9], and releasing reports on AI personhood [12]—indicate that the industry has spontaneously realized that philosophical questions concerning consciousness, meaning, and understanding are no longer merely external “ethical review” but have become “technical bottlenecks” determining whether AGI can ultimately be realized.
As a deep analysis insightfully notes, the real intent behind DeepMind’s bold move to place “machine consciousness, human-AI relations, AGI readiness” into a single formal role is not to declare that “models have awakened,” but rather to translate the currently unanswerable question “Does consciousness exist?” into operational questions such as “At what thresholds do we change design, assessment, disclosure, and governance?” [7]. This transformation from “philosophical inquiry” to “architectural and governance engineering” is precisely the core task of the Philosophy of AI Technology, and it is also the practical starting point for disciplinary construction.
Establishing the Philosophy of AI Technology as an independent sub-discipline can be advanced along the following dimensions.
9.1 Academic Institutionalization
·Establish a “Philosophy of AI Technology” research track within university philosophy departments;
·Launch a journal or special issues dedicated to the Philosophy of AI Technology;
·Compile an introductory textbook, Introduction to the Philosophy of AI Technology;
·Establish a “Professional Committee on the Philosophy of AI Technology” under the Chinese Society for Dialectics of Nature.
9.2 Theoretical Construction
·Establish a core conceptual framework for the Philosophy of AI Technology: artificial intentionality, computational understanding, endogenous value, meaning grounding, cognitive primitives, etc.;
·Conduct systematic philosophical critique and reconstruction of AI paradigms (Tokenism vs. Morpho-Root paradigm);
·Elevate Logographic AI theory to a core theoretical framework within the Philosophy of AI Technology;
·Establish Natural Language Ontology (NLO) as the ontological foundation of the Philosophy of AI Technology;
·Incorporate the symbol grounding problem and its tiered system (L0–L4) as a core topic within the Philosophy of AI Technology;
·Adopt “pluralistic commensurability” as a meta-methodological principle for the Philosophy of AI Technology;
·Consciously employ Marxist philosophy and the dialectics of nature as the methodological foundation of the Philosophy of AI Technology.
9.3 Interaction with AI Science
The Philosophy of AI Technology should not be merely “retrospective reflection” but should intervene at the design stage of AI:
·Participate in the philosophical evaluation of AI architectures;
·Provide a philosophical foundation for AI explainability;
·Offer “endogenous value” as an alternative approach to AI value alignment;
·Provide L0–L4 tiered assessment standards for AI symbol grounding.
9.4 International Dialogue
·Establish dialogue with the international AI philosophy community (e.g., DeepMind’s philosopher team);
·Promote Logographic AI theory on international academic platforms;
·Form a Chinese school of thought on issues such as “meaning embedding,” “endogenous value,” “Civilization-Native Intelligence,” and “symbol grounding”;
·Contribute original Chinese theories of the Philosophy of AI Technology on the foundation of Marxist philosophy.
9.5 Phased Planning
The disciplinary construction of the Philosophy of AI Technology can be advanced in three phases:
| Phase | Timeframe | Core Tasks | Expected Outcomes |
| --- | --- | --- | --- |
| Short-term | 1–3 years | Complete basic disciplinary institutionalization: pilot research tracks in 2–3 universities; publish the first textbook; establish a professional committee; build an academic community | Initial institutionalization of the discipline, formation of a core research team |
| Mid-term | 3–5 years | Refine the theoretical system: form a Philosophy of AI Technology theoretical framework with Chinese characteristics; complete prototype system validation for the Morpho-Root paradigm; establish regular dialogue mechanisms with the international AI philosophy community | Mature theoretical system, attainment of international academic recognition |
| Long-term | 5–10 years | Form an academic school: establish a Chinese school of thought with international influence on issues such as “meaning embedding” and “Civilization-Native Intelligence”; promote the technical realization of a pluralistic CNI ecosystem | Formation of a school of thought, substantive impact on the development of the AI industry |
9.6 Cooperation Mechanisms with Industry
The disciplinary construction of the Philosophy of AI Technology cannot proceed in isolation but must establish organic interaction with the AI industry:
·Joint Research Projects: Cooperate with frontier AI companies such as DeepMind and Anthropic to establish joint research projects on “AI meaning grounding,” “value embedding architectures,” etc., embedding philosophical reflection into the technology R&D process.
·Philosopher-in-Residence Mechanisms: Drawing on the model of DeepMind hiring Henry Shevlin, promote the establishment of dedicated philosopher positions in domestic AI companies, enabling philosophers to participate in product design, risk assessment, and the formulation of governance frameworks.
·Co-construction of Morpho-Root Annotation Platforms: Cooperate with language technology companies to build open platforms for annotating multi-character Morpho-Roots, reducing the construction cost of the Morpho-Root system and accelerating the engineering implementation of Logographic AI (a minimal data sketch of one such annotation record follows this list).
·Industry Ethics Committees: Introduce perspectives from the Philosophy of AI Technology into corporate AI ethics committees, translating concepts such as “meaning grounding” and “endogenous value” into actionable product design specifications.
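As an illustration only, the following Python sketch shows what a single annotation record on such a platform might look like. The schema is hypothetical: this paper specifies a three-tier granularity system and domain-expert verification, but it does not fix field names or formats, so every field below is an assumption.

```python
from dataclasses import dataclass, field

@dataclass
class MorphoRootAnnotation:
    """One expert-annotation record for a multi-character Morpho-Root.

    Hypothetical schema: the paper calls for a three-tier granularity
    system and expert verification but fixes no concrete data layout.
    """
    surface_form: str          # the character sequence, e.g. "矛盾"
    granularity_tier: int      # 1, 2, or 3 in the three-tier granularity system
    attributes: dict[str, str] = field(default_factory=dict)       # embedded attribute structure
    relations: list[tuple[str, str]] = field(default_factory=list)  # (relation, target root) pairs
    annotator_id: str = ""     # enables inter-annotator agreement checks
    verified: bool = False     # set once a domain expert has signed off

# A platform record awaiting expert verification; an open platform would
# aggregate many such records and compute inter-annotator agreement
# before admitting a root into the Morpho-Root system.
record = MorphoRootAnnotation(
    surface_form="矛盾",
    granularity_tier=2,
    attributes={"domain": "philosophy", "abstractness": "high"},
    relations=[("opposed_to", "统一")],
    annotator_id="a-042",
)
```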
9.7 Talent Development Programs
The Philosophy of AI Technology requires interdisciplinary talent. The following course modules are recommended for graduate (Master’s and Ph.D.) programs:
| Module | Core Courses | Training Objectives |
| --- | --- | --- |
| Philosophical Foundations | Classic readings in philosophy of technology; Marxist philosophy of technology; philosophy of mind and theories of consciousness | Establish capacity for philosophical analysis |
| AI Technical Principles | Introduction to deep learning; large language model architectures; cognitive computing | Understand the logic of technical implementation |
| Interdisciplinary Topics | The symbol grounding problem; AI explainability; philosophical foundations of value alignment; Civilization-Native Intelligence | Develop interdisciplinary problem awareness |
| Practical Components | Participation in corporate AI ethics assessments; Morpho-Root system prototype development; AI policy research | Translate theory into practical capacity |
Regarding training modalities, a “dual-advisor system” (philosophy advisor + AI technology advisor) is encouraged, and dissertations should demonstrate both philosophical depth and technical insight.
9.8 Research Limitations and Future Work
As a foundational paper proposing the establishment of the Philosophy of AI Technology, this paper’s primary contribution lies in advancing a theoretical framework and arguing for the discipline’s legitimacy. However, it also has the following limitations:
First, the construction cost of the Morpho-Root system. The three-tier granularity system of Morpho-Roots requires extensive annotation and verification by domain experts, and its construction cost is far higher than that of Tokenism’s unsupervised pre-training. How to reduce construction costs while preserving the “rootedness” of Morpho-Roots is a key question for future research.
Second, the generalization capacity of the Morpho-Root paradigm on open-domain tasks. Reasoning in Morpho-Root networks depends on preset relation functions R and may be less flexible than the statistical fitting of Tokenism when dealing with highly open, rapidly changing linguistic phenomena.
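To make this flexibility concern concrete, the sketch below models R as a hand-specified lookup table over Morpho-Roots. This is an assumption for illustration; the paper does not fix a concrete signature for R, and all entries are toy examples.

```python
# Preset relation function R, modeled here as a lookup table mapping
# (morpho-root, relation) pairs to sets of result roots.
R = {
    ("water", "heated_becomes"): {"steam"},
    ("steam", "cooled_becomes"): {"water"},
}

def reason(root: str, relation: str) -> set[str]:
    """Deterministic reasoning: answers only what R explicitly encodes."""
    return R.get((root, relation), set())

print(reason("water", "heated_becomes"))  # {'steam'}
print(reason("water", "novel_relation"))  # set()
# The empty result illustrates the rigidity noted above: an unencoded
# relation yields no answer at all, whereas statistical fitting would
# still interpolate a (possibly ungrounded) guess.
```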
Third, the technical challenges of realizing pluralistic commensurability. The “semantic bridging layer” proposed in this paper remains a preliminary draft. Issues such as the design of Universal Semantic Primitives and methods for verifying cross-CNI semantic alignment require further research.
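Purely as one possible toy rendering of that preliminary draft, the sketch below models the bridging layer as exact matching over Universal Semantic Primitive (USP) signatures. The primitive inventory, the concept mappings, and the matching rule are all assumptions introduced here for illustration, not designs proposed by this paper.

```python
# Toy inventory of Universal Semantic Primitives; the real inventory
# and its justification are exactly the open questions noted above.
USP_INVENTORY = {"WATER", "VAPOR_PHASE", "CHANGE"}

# Each CNI maps its native concepts to USP signatures (toy mappings).
cni_logographic = {"水": {"WATER"}, "汽": {"WATER", "VAPOR_PHASE"}}
cni_alphabetic = {"water": {"WATER"}, "steam": {"WATER", "VAPOR_PHASE"}}

def bridge(concept: str, source: dict, target: dict) -> list[str]:
    """Return target-CNI concepts whose USP signature matches exactly.

    Exact matching is the simplest conceivable alignment rule; real
    cross-CNI alignment would need graded similarity and verification.
    """
    signature = source.get(concept)
    if signature is None or not signature <= USP_INVENTORY:
        return []  # unmapped concept, or signature uses non-universal primitives
    return [c for c, sig in target.items() if sig == signature]

print(bridge("汽", cni_logographic, cni_alphabetic))  # ['steam']
print(bridge("道", cni_logographic, cni_alphabetic))  # [] -- a concept with
# no agreed USP signature cannot yet be bridged, which is the hard case.
```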
Fourth, the lack of experimental validation. This paper has primarily remained at the level of philosophical argumentation and architectural design and has not yet experimentally verified the actual advantages of the Morpho-Root paradigm over Tokenism on tasks such as causal reasoning, explainability, and value alignment. Future work will include building a Morpho-Root prototype system and conducting benchmark tests.
10. Conclusion: From Chen Changshu to Logographic AI, from Philosophy of Technology to a Marxist Philosophy of AI Technology
Professor Chen Changshu established the “engineering” tradition for Chinese philosophy of technology—grounded in practice, oriented toward problem-solving, and directed toward application. This tradition has acquired a new historical mission in the AI era, and Marxist philosophy and the dialectics of nature provide a deeper theoretical foundation for this mission.
As a new branch of Chinese philosophy of technology, the core mission of the Philosophy of AI Technology can be summarized as follows: in an era when technology begins to generate meaning, to inquire anew into “where meaning comes from,” “what it means to understand,” and “how value takes root,” and to answer these questions from the standpoint of Marxist practical epistemology.
Logographic AI theory provides an ontological foundation for this mission:
·The “Morpho-Root” as a cognitive primitive with embedded meaning answers “where meaning comes from”;
·Natural Language Ontology (NLO) establishes language as an ontological foundation for cognition, answering “what it means to understand”;
·The mechanism of embedding value axioms answers “how value takes root”;
·The tiered solution (L0–L4) to the symbol grounding problem answers “how symbols are anchored to the world”;
·The transcendence of Saussure (attribute embedding + embodied interfaces + social grounding) answers “how a symbol system can possess both differential structure and rootedness”;
·Pluralistic commensurability as a meta-methodology provides a philosophical foundation for cross-civilizational dialogue;
·Marxist practical epistemology provides a philosophical justification for L4 social grounding, historical materialism provides a framework for the social analysis of AI, and the dialectics of nature provides methodological guidance for the evolution of AI paradigms.
Chinese philosophy of technology moves from Chen Changshu’s “artificial nature” to the AI era’s “artificial meaning,” from “engineering-ism” to “cognitivism,” and from “following the international trend” to “leading the international trend.” This is not a departure from Chen Changshu’s academic legacy but an inheritance and advancement of his spirit of “basing on practice, opening up and innovating.” The conscious incorporation of Marxist philosophy and the dialectics of nature endows this discipline with a unique theoretical foundation and disciplinary identity distinct from Western philosophy of technology.
Academic Manifesto
Let intelligence have roots, let philosophy have soul, and let theory have practical direction. This is both the technological manifesto of Logographic AI and the disciplinary declaration of the Chinese school of Philosophy of AI Technology.
References
[1] Chen, C. (1982). Kexue yu jishu de tongyi he chayi [The unity and difference between science and technology]. Guangming Daily.
[2] Chen, C. (1999). Jishu zhexue yinlun [An introduction to the philosophy of technology]. Beijing: Science Press.
[3] Yuan, D., & Chen, C. (1986). Lun jishu [On technology]. Shenyang: Liaoning Science and Technology Publishing House.
[4] Chen, C. (2022). Chen Changshu wenji: Jishu zhexue juan [Collected works of Chen Changshu: Volume on philosophy of technology]. Beijing: Science Press.
[5] Heidegger, M. (1977). The Question Concerning Technology, and Other Essays (W. Lovitt, Trans.). New York: Harper & Row.
[6] Lerchner, A. (2026, March 19). The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness (Version 3) [Preprint]. PhilArchive. https://philarchive.org/rec/LERTAF
[7] Ji, Y. (2026, April 15). Google DeepMind Hires a Philosopher and the “Machine Consciousness” Issue Report: Current State, Truth, and Strategic Inferences—A Layered Analysis Based on Public Evidence Regarding Artificial Consciousness Research, Industry Promotion, and Google’s Strategic Moves. Zhihu. https://zhuanlan.zhihu.com/p/2028195637024277172
[8] King, H. (2026, March 26). Protecting people from harmful manipulation. DeepMind Blog. https://deepmind.google/blog/protecting-people-from-harmful-manipulation/
[9] Shevlin, H. (2026). Three frameworks for AI mentality. Frontiers in Psychology. Advance online publication. https://doi.org/10.3389/fpsyg.2026.1715835
[10] Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1–3), 335–346. https://doi.org/10.1016/0167-2789(90)90087-6
[11] Shanahan, M., McDonell, K., & Reynolds, L. (2023). Role play with large language models. Nature, 623, 493–498. https://doi.org/10.1038/s41586-023-06647-6
[12] Leibo, J. Z., Vezhnevets, A. S., Cunningham, W. A., & Bileschi, S. M. (2025). A pragmatic view of AI personhood. arXiv. https://doi.org/10.48550/arXiv.2510.26396
[13] Saussure, F. de. (1916). Course in General Linguistics (W. Baskin, Trans.). New York: Philosophical Library.
[14] Humboldt, W. von. (1999). On language: On the diversity of human language construction and its influence on the mental development of the human species (M. Losonsky, Ed.; P. Heath, Trans.). Cambridge: Cambridge University Press.
[15] Whorf, B. L. (1956). Language, thought, and reality (J. B. Carroll, Ed.). Cambridge, MA: MIT Press.
[16] Derrida, J. (1976). Of grammatology (G. C. Spivak, Trans.). Baltimore: Johns Hopkins University Press.
[17] Wittgenstein, L. (1968). Philosophical investigations: The English text of the third edition (G. E. M. Anscombe, Trans.). New York: Macmillan.
[18] Marcuse, H. (1964). One-Dimensional Man: Studies in the Ideology of Advanced Industrial Society. Boston: Beacon Press.
[19] Ihde, D. (1990). Technology and the Lifeworld. Indiana University Press.
[20] Feenberg, A. (1999). Questioning Technology. Routledge.
[21] Dreyfus, H. (1972). What Computers Can’t Do. Harper & Row.
[22] Clark, A. (1998). Being there: Putting brain, body, and world together again. MIT Press.
[23] Pearl, J., & Mackenzie, D. (2018). The Book of Why: The New Science of Cause and Effect. Basic Books.
[24] Xu, S. (2012). Shuowen Jiezi [Explaining graphs and analyzing characters]. Beijing: Zhonghua Book Company.
[25] Liu, S. (2025). Logographic AI: A paradigm revolution beyond Tokenism. PSSXiv. https://doi.org/10.12451/202511.03835
[26] Liu, S. (2025). Logographic AI: Resolving the token dilemma through Chinese character morpho-root system. PSSXiv. https://doi.org/10.12451/202504.00172
[27] Liu, S. (2025). Escaping “technological capture”: The future path of AI from architectural improvement to paradigm revolution. PSSXiv. https://doi.org/10.12451/202512.03460
[28] Liu, S. (2026). Paradigm involution or paradigm revolution? — On the positioning of DeepSeek Engram in the competition of AI paradigms. PSSXiv. https://doi.org/10.12451/202601.03875
[29] Amodei, D. (2024). Reflections on AI safety and alignment. Anthropic Research Blog. https://www.anthropic.com/research/alignment-reflections
[30] Marx, K. (1979/1980). Economic Manuscripts of 1857–1858 (Grundrisse). In Marx/Engels Complete Works (Vol. 46; compiled and translated by the Central Compilation and Translation Bureau). Beijing: People’s Publishing House.
[31] Engels, F. (2015). Dialectics of Nature (Central Compilation and Translation Bureau for the Works of Marx, Engels, Lenin and Stalin, Trans.). Beijing: People’s Publishing House.
[32] Lenin, V. I. (1990). Philosophical Notebooks. Beijing: People’s Publishing House. (Original works written 1895–1916)
[33] Mao, Z. (1991). On Practice. In Selected Works of Mao Zedong (2nd ed., Vol. 1). Beijing: People’s Publishing House.
[34] Mao, Z. (1991). On Contradiction. In Selected Works of Mao Zedong (2nd ed., Vol. 1). Beijing: People’s Publishing House.
[35] Yu, G. (1995). Encyclopedia of Dialectics of Nature. Beijing: Encyclopedia of China Publishing House.
[36] Mitcham, C. (1994). Thinking through Technology: The Path between Engineering and Philosophy. Chicago: University of Chicago Press.
[37] Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press.
[38] Wang, G. (2025, March 21). “There is no one else like that”: Remembering Professor Chen Changshu, pioneer and founder of Chinese philosophy of technology at Northeastern University. Northeastern University News. https://neunews.neu.edu.cn/info/1002/940341.htm
[39] Editorial Committee of the Chinese Philosophical Yearbook. (1984). Chinese Philosophical Yearbook 1984. Beijing: Encyclopedia of China Publishing House.
[40] Chen, F., & Zhang, M. (2002). Analyzing Technology. Fuzhou: Fujian People’s Publishing House.
[41] Chen, F. (2024). The Age of Technology: What Can Philosophy Do? — Reflections Based on the Philosophy of Technology. Keynote lecture at the 60th “Aizhi Forum,” Anhui University.
[42] Chen, F., & Cheng, H. (2017). A Marxist Examination of Artificial Intelligence. Ideological and Theoretical Education, (11), 17–22.
[43] Chen, F., & Zhu, C. (Eds.). (2006). Philosophy of Technology in the Age of Globalization — Proceedings of the 2004 International Symposium on Philosophy of Technology and Technology Ethics. Shenyang: Northeastern University Press.
[44] Chen, F., & Zhu, C. (2020). A History of the Philosophy of Technology. Beijing: China Social Sciences Press.
[45] 36Kr. (2026, April 8). Anthropic’s “Oppenheimer Moment”: The Company That Fears AI the Most Has Created the Most Dangerous AI. https://www.sohu.com/a/1006918286_602994