“巴拉吉的困惑”——从“天网”到“孞联网”:理念、机制、技术
送交者: 孞烎Archer 2024年12月23日17:24:22 于 [天下论坛]
Balaji's Confusion “巴拉吉的困惑”
——当前AI研发中的法律、伦理与共生困境分析与应对之道
钱 宏 Archer Hong Qian
OpenAI近年遭多家媒体指控使用未授权内容训练AI模型。比如,《纽约时报》认为,“OpenAI几乎分析了网络上可取得的文本”;而从OpenAI离职、继而自杀的巴拉吉,则被媒体说成是“AI吹哨人”。
说到AI吹哨人,最早的应该是发起创立OpenAI的马斯克和奥特曼。但随着AI大模型的问世及其迅猛发展,马斯克最早提出担忧而退出,居功至伟的伊利亚也离开了OpenAI。人们,特别是OpenAI内部员工如巴拉吉,对AI产生困惑,是可以理解的。但是,怎么看待和怎样处理这种普遍存在的困惑,既涉及知识产权的分寸把握,更涉及AI(ANI、AGI)研发的不确定性把握,以及开源与闭源的关系问题。
我把以下四种情况和问题,统称为“巴拉吉困惑”(Balaji's Confusion):
情况和问题1:前OpenAI研究人员巴拉吉(Suchir Balaji)为自己在GPT-4训练计划中协助公司从网络上搜集资料感到担忧。他接受《纽约时报》专访,认为OpenAI的做法违反美国著作权法的规定,并发表文章论述说,自己的论点并非专门针对ChatGPT,也适用于许多生成式AI产品。巴拉吉因此于2024年8月从OpenAI辞职,11月被发现在加州自杀身亡。那么,巴拉吉的自杀和他从OpenAI离职的原因之间,有必然的直接关联吗?
情况和问题2:2024年5月,OpenAI首席科学家伊利亚(Ilya Sutskever)离开了他工作十年、并为生成式大型AI模型作出决定性贡献的OpenAI(山姆·奥特曼说:“没有他,就没有现在的OpenAI”)。他的离开,对巴拉吉8月的离职及其自杀,有影响吗?
情况和问题3:一些媒体特别是自媒体夸大其词、危言耸听,借此炒作、混淆视听,把巴拉吉说成是“AI吹哨人”:一会儿说AI研发没有“公开透明”将危害人类安全,一会儿拾2023年马斯克等上千名科学家、企业家联名建议AI研发“停一停”的牙慧。其背后的动机是什么?
情况和问题4:一边是媒体人将巴拉吉的自杀,爆炒为“公开透明”的问题;一边是真正的AI业内行家对“开源、闭源”问题发生激烈争议。比如,被誉为“AI教父”的图灵奖得主辛顿(Geoffrey Hinton)于2024年7月警告:“大模型开源的危害性,等同于将原子弹开源”,并批评同为图灵奖得主、现任Meta首席AI科学家的杨立昆“已经疯了”。对此,早在2024年4月,共生学人钱宏(Archer Hong Qian)和英属哥伦比亚大学教授王泽华在《An open letter from Symbioscholars to the six giants in the AI world(共生学人致AI世界六巨头的公开信)》中,就明确提出AI研发“开源或不开源,都应当是有条件的,要视时空意间状况而定”,不能“在要不要平衡与如何平衡:开源/闭源、放能/吸能、驱动/制动(发展/监管)、冲突/恊和、主权/人权⋯⋯问题上的争论上,过于拘泥就事论事”。
那么,如何看待巴拉吉困惑(Balaji's Confusion)的这四种情况和问题,并找到因应之道呢?
一、OpenAI被指控使用未授权内容训练AI模型
媒体对OpenAI使用未经授权的内容进行训练的指控,确实引发了关于AI研发合法性和道德性的广泛讨论。
法律层面:这类指控涉及著作权法的复杂问题,尤其是在“公平使用”(Fair Use)的框架下。AI公司通常会辩解其训练数据的使用属于非商业性、教育性或研究性目的,且对内容进行转化性使用,不是简单复制。然而,这种辩护在各国法律体系下的适用性不一。例如,美国的“公平使用”原则可能会提供某种保护,但在欧盟或中国,类似的行为可能更难以被视为合法。
道德层面:即便在法律上可能有争议性豁免,未经许可使用内容的问题也暴露了数据使用的透明性不足。公众和内容创作者对生成式AI背后的数据来源保持疑虑,这种不透明会损害行业的公信力。
在共生哲学(Symbiosism)视角下,这种行为被认为违背了人与社会、人与知识之间的共生关系。知识资源的生产者与使用者应通过协商建立公平的互惠机制,而非以技术便利为借口。
二、巴拉吉的离职与自杀的关联性
巴拉吉的离职和自杀之间是否存在直接关联,外界尚无确凿证据。不过,有几点需要注意:
心理因素:如果他对自己参与的工作产生了深刻的道德困境,这可能对心理健康造成压力。然而,自杀的成因通常是多重复杂的,未必仅仅与职业相关。
制度因素:离职后的无助感可能加剧个体对生活的消极评价。如果他在离职后缺乏支持系统(例如社交网络或职业替代选择),这种孤立可能对他的心理状态造成严重影响。
需要指出的是,在伦理困境和组织压力下离职的员工,往往成为“内部吹哨人”(whistleblower)或“伦理诉求者”(ethical advocate),而社会舆论过度放大其行为,也可能增加其精神负担。大概正是出于这个原因,马斯克对此仅在X社交平台上发了一个词:Hmm(唔)。
三、伊利亚的离职是否影响巴拉吉的决策
伊利亚的离职可能间接加重了巴拉吉的决策压力:
榜样效应:伊利亚作为OpenAI的核心人物,其离职可能暗示内部决策或价值观出现重大问题。巴拉吉可能因此认为,自己对于“伦理问题”的关注并非孤立的担忧。
团队动态:领导人物的离开会引发组织内部的不稳定。对年轻研究员而言,失去这样的精神支柱,可能放大其孤立感和道德疑虑。
不过,伊利亚的离职主要涉及其对OpenAI未来发展方向的战略分歧,而非对训练数据来源的法律与伦理担忧。因此,两者之间的关联性更多是心理或象征层面的。
四、关于开源与闭源的争论
这一问题涉及技术发展、社会责任和全球安全之间的微妙平衡:
开源的益处:开源促进技术透明性和公平性,降低技术垄断风险,有助于小型企业和学术机构平等参与竞争。
开源的风险:正如辛顿所警告,开源可能将强大工具置于不当使用者手中,其危害性如同“将原子弹开源”。这一比喻虽极端,但提醒了AI技术一旦落入恶意行为者之手,其潜在后果不容忽视。
共生哲学的视角:钱宏和王泽华在《共生学人致AI世界六巨头的公开信》(http://symbiosism.com.cn/8183.html)中提出的“时空意间”论点,为这一争论提供了有力的框架。开源或闭源不应一刀切,而应根据“特定时空的共生状态”做出动态权衡。这种哲学强调:
平衡:技术发展与监管之间的平衡,避免片面追求效率或安全。
勰商:建立跨国界、跨领域的共生协定,推动AI开发者与政府、学术界和公众的对话。
伦理驱动:将AI的设计与部署嵌入明确的伦理框架中,确保其应用符合人类的整体利益。
因此,辛顿与杨立昆的争论可以看作是这一议题在学术界的具体表现:一方主张风险控制,另一方强调技术开放。然而,过度极化的观点,可能忽略了共生哲学所主张的“第三条道路”——既不是绝对开源,也不是彻底闭源,而是条件性的、动态调整的治理策略。
五、AI法律、伦理与共生困境的因应之道
数据使用透明化:OpenAI的数据使用问题需要建立更透明的机制,化解创作者与技术开发者之间的信任危机。
伦理困境与心理影响:巴拉吉事件揭示了伦理困境对AI从业者心理健康的深远影响,但其与自杀的直接关联仍需进一步调查。
内部价值观冲突:伊利亚的离职体现了AI行业内部价值观的冲突,对巴拉吉的离职可能存在间接性影响,但主要是心理层面的关联。
开源与闭源的平衡:开源与闭源的争论需要避免极端化。在共生哲学框架下,“条件性治理”提供了有效路径:根据时空状态,动态调整技术开放或限制的尺度。
凡事交互主体共生,孞念改变一切:用孞念的力量改变生活,最终改变世界(Minds Change Everything: Using the Power of Minds to Change Lives and Ultimately the World)。对于如何从技术与伦理价值上解决“AI研发中的法律、伦理与共生困境”问题,共生学人基于公元前8世纪东方伟大思想家伯阳父“和实生物,同则不继”和公元前5世纪思想家老子“天网恢恢,疏而不失”的理念,创造性地提出建构交互主体共生的孞联网(MindsNetworking)构想。
这里的“天网”,讲的是“天之道”,即赋有自然秩序的生态圈。就是说,这个“生态圈天网”是生命演化与环境演变自相互作用的呈现,构成一个能够自我调节、“疏而不失”、适宜包括人类在内的地球生灵生活的“超级有机体”(Superorganism)地球生态圈。英国科学家洛夫洛克(James Lovelock)继马古利斯(Lynn Margulis)提出“关于真核生物和早期原核生物之间产生共生的构想”(1970)之后,借用古希腊神话中大地女神盖亚(Gaia)的形象(1972),描述了在生命演化与环境演变的交互共生(Symbiogenesis)过程中,所有生物(植物、动物、微生物)的演化,空气、海洋、岩石的理化演变,以及人、事、物的进化,都在一张相互关联、新陈代谢强弱代偿,犹如宽阔广大(恢恢)而游刃有余(不失)的天网中进行。
那么,在这个生态圈天网中,什么东西配得上如此宽阔广大、游刃有余呢?只能是God赋予人的心性孞念(Mind)。所以,雨果说:“世界上最宽阔的是海洋,比海洋更宽阔的是天空,比天空更宽阔的是人的心灵。”所谓“一念天堂,一念地狱”(One Mind of heaven, one Mind of hell),就是说人及其人造之事物处于什么状态,是由心性孞念决定的。根据这个道理,建构一个孞联网,让人及AI置身其中,通过对自己行为的是非、价值进行实时评估的激励或抑制机制,不仅比任何政府性监控都更“疏而不失”,而且降本增效、高效赋能。当马斯克的太空高速互联网——星链(Starlink)计划被升级赋予孞联网(MindsNetworking或MindsWeb)的内涵时,就将成为地球生态圈天网的一种技术支撑。
共生学人相孞,在孞联网中,人与人、人与AI,可望实现这样的生活方式与价值目标:存同尊异,交互主体,生生不息,间道共生!
六、从“天网”到“孞联网”:理念、机制、技术、哲学
孞念(Minds)作为人类心性的核心,决定了人类及其创造的AI在共生网络中的行为状态。基于此,共生学人提出孞联网(MindsNetworking)构想,旨在利用孞念力量引领技术与伦理的深度融合。
(1) 天网恢恢:自然秩序与生态网络
“天网恢恢,疏而不失”源自东方哲学中的“天之道”,是自然秩序与生态网络的象征:
特性: 开放而精准,宽广而有序,体现生命演化与环境演变的交互共生。
生态圈天网: 由生物、环境及其相互作用构成的超级有机体(Superorganism),为地球上的生灵提供可持续的生存环境。
英国科学家洛夫洛克的“盖亚假说”进一步揭示了这一生态网络的动态平衡特性,展现自然的自组织与自我调节能力。
(2) 孞念决定状态:人类心性的力量
心灵的广阔: 雨果曾言,“世界上最宽阔的是海洋,比海洋更宽阔的是天空,比天空更宽阔的是人的心灵。”
思想的影响: “一念天堂,一念地狱”体现了思想与价值观对行为及其结果的根本性影响。
孞联网是一种基于实时评估与反馈机制的交互网络,其运行逻辑包括以下核心要素:
(1) 实时评估
多维度行为分析: 从真伪、善恶、美丑、智慧愚昧、正误、神性魔性六个维度动态评估人类与AI的行为。
全息式追踪: 通过数据透明与可追溯性,实现行为的全面记录与分析。
(2) 激励与抑制
激励机制: 强化正向行为与价值创造,推动人与AI协同增效。
抑制机制: 动态制约负向行为与风险偏差,确保系统的伦理与法律合规性。
(3) 自组织调节
模仿天网生态的“疏而不失”特性,实现行为的自我调整与优化(参见下面的示意性代码草图)。
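为直观说明上述“实时评估—激励/抑制—自组织调节”的运行逻辑,下面给出一个极简的Python示意草图。六个维度的名称取自上文,而评分取值范围、等权汇总方式与阈值threshold等参数,均为本文为演示而作的假设,并非孞联网的实际设计:

```python
# 示意性草图:孞联网"实时评估-激励/抑制"机制的极简模型(参数均为假设)。
from dataclasses import dataclass, field

DIMENSIONS = ["真伪", "善恶", "美丑", "智慧愚昧", "正误", "神性魔性"]  # 六个评估维度(取自上文)

@dataclass
class Actor:
    """孞联网中的一个行为主体(人或AI)。"""
    name: str
    credit: float = 0.0                           # 累积的激励/抑制分值
    history: list = field(default_factory=list)   # 全息式追踪:行为记录可回溯

def evaluate(scores: dict[str, float]) -> float:
    """对一次行为在六个维度上的评分(各取 [-1, 1])做等权汇总,权重为演示假设。"""
    assert set(scores) == set(DIMENSIONS), "评分必须覆盖全部六个维度"
    return sum(scores.values()) / len(DIMENSIONS)

def feedback(actor: Actor, scores: dict[str, float], threshold: float = 0.2) -> str:
    """激励正向行为、抑制负向行为,并留下可追溯记录(自组织调节的基础)。"""
    value = evaluate(scores)
    actor.history.append((scores, value))         # 全程记录,"疏而不失"
    if value >= threshold:
        actor.credit += value                     # 激励机制:强化正向行为与价值创造
        return "激励"
    if value <= -threshold:
        actor.credit += value                     # 抑制机制:负分动态制约风险行为
        return "抑制"
    return "观察"                                 # 中性行为:仅记录,不干预

# 用例:评估某AI的一次输出行为
agent = Actor("AI-Agent")
action = {"真伪": 0.8, "善恶": 0.6, "美丑": 0.1, "智慧愚昧": 0.5, "正误": 0.7, "神性魔性": 0.3}
print(feedback(agent, action), round(agent.credit, 2))  # 输出: 激励 0.5
```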
孞联网的构建依赖于多项尖端技术的整合,为其运行提供坚实基础:
(1) 量子协同机制 利用量子叠加性(Quantum Superposition)与量子相干性(Quantum Coherence),实现思想的高效交互与全球协作。
(2) 区块链与人工智能结合
区块链: 提供去中心化的行为记录与评估体系,确保数据安全与透明。
AI伦理模块: 赋予AI实时伦理分析与动态调整能力。
(3) 全球网络支持 升级马斯克“星链”(Starlink)计划,构建覆盖全球的高速交互网络,支持孞联网的数据流通与实时评估功能(区块链式行为记录的思路,可参见下面的示意草图)。
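为说明“区块链:去中心化的行为记录与评估体系”这一思路,下面给出一个极简的哈希链(hash chain)Python草图。其中的字段名(actor、verdict等)与数据结构均为演示假设,并非任何现有区块链系统的实际接口:

```python
# 示意性草图:用哈希链表达"去中心化、可追溯、防篡改"行为记录的最小原型。
import hashlib
import json
import time

def make_block(record: dict, prev_hash: str) -> dict:
    """把一条行为评估记录封装为链上区块(字段均为演示假设)。"""
    block = {
        "timestamp": time.time(),
        "record": record,           # 例如:行为主体、六维评分、评估结论
        "prev_hash": prev_hash,     # 链接前一区块,形成防篡改的链条
    }
    payload = json.dumps(block, sort_keys=True, ensure_ascii=False)
    block["hash"] = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    return block

def verify(chain: list) -> bool:
    """校验链的完整性:篡改任何一条记录都会破坏后续区块的哈希链接。"""
    return all(cur["prev_hash"] == prev["hash"]
               for prev, cur in zip(chain, chain[1:]))

# 用例:记录两次评估结论,并校验链的完整性
chain = [make_block({"actor": "AI-Agent", "verdict": "激励"}, prev_hash="0" * 64)]
chain.append(make_block({"actor": "AI-Agent", "verdict": "观察"}, chain[-1]["hash"]))
print(verify(chain))  # 输出: True
```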
孞联网的伦理愿景,是交互主体共生的未来:
(1) 存同尊异:和实生物 尊重多样性,通过交互实现主体之间的深度协作与相互赋能。
(2) 生生不息:疏而不失 借助实时评估与反馈机制,动态调整人与AI的行为,实现生态系统的持续进化。
(3) 间道共生:技术与伦理的双向赋能 通过孞念的力量推动技术赋能伦理,反哺技术的发展,实现人与AI、人与自然的协同共生。
结语:从天网到孞联网
孞联网作为共生哲学的创新应用,通过“天网恢恢,疏而不失”的理念与孞念的力量,展现了技术与伦理结合的新可能性。它大大超越了传统监管模式,通过实时评估与反馈机制,将人类与AI纳入一个动态共生的生态网络。未来,孞联网将推动人类与AI迈向“存同尊异、交互主体、生生不息、间道共生”的新时代,实现技术赋能伦理、伦理推动技术的双向共生,造福全人类与地球生态。
Balaji's Confusion
— Analyzing and Addressing Legal, Ethical, and Symbiotic Challenges in Current AI Development
By Archer Hong Qian
In recent years, OpenAI has faced accusations from various media outlets of using unauthorized content to train its AI models. For instance, The New York Times claimed that “OpenAI has analyzed almost all available texts on the internet.” The suicide of former OpenAI employee Suchir Balaji, following his departure from the company, has been framed by the media as the action of an "AI whistleblower."
When it comes to AI whistleblowers, the earliest figures were likely Elon Musk and Sam Altman, who co-founded OpenAI. However, as large-scale AI models rapidly developed, Musk withdrew due to his concerns, and key contributor Ilya Sutskever eventually left OpenAI. Employees like Balaji reportedly struggled with ethical uncertainties regarding AI, which is understandable. Addressing these uncertainties requires careful consideration of intellectual property issues, the inherent unpredictability of AI development (ANI, AGI), and the debate over open-source versus closed-source approaches.
I categorize the following four situations and issues under what I call "Balaji’s Confusion":
(1) Balaji’s Concerns: Former OpenAI researcher Suchir Balaji expressed concerns over his role in assisting the company to collect data from the internet for GPT-4’s training program. He believed OpenAI's practices violated U.S. copyright law and stated that his critique applied not only to ChatGPT but to many generative AI products. After leaving OpenAI in August 2024, he was found dead in California in November. Is there a direct and inevitable link between his departure from OpenAI and his suicide?
(2) Ilya’s Departure and Its Influence: In May 2024, OpenAI’s Chief Scientist, Ilya Sutskever, who had worked there for ten years and made decisive contributions to generative AI models, left the organization. (Sam Altman commented, “Without Ilya, OpenAI as we know it wouldn’t exist.”) Did Ilya’s departure influence Balaji’s decision to leave or his eventual suicide?
(3) Media Sensationalism: Certain media, particularly self-media, exaggerated and distorted Balaji’s story. They claimed that his suicide highlighted AI’s lack of transparency, often echoing the call made by Musk and a thousand scientists in 2023 to pause AI development. What motives lie behind this sensationalism?
(4) Open vs. Closed-Source Debate: While some in the media sensationalized Balaji’s suicide as an issue of transparency, real debates among AI experts revolved around open-source and closed-source approaches. Geoffrey Hinton, the “Godfather of AI,” warned in July 2024 that "open-sourcing large models is as dangerous as open-sourcing nuclear weapons." He criticized Yann LeCun, Chief AI Scientist at Meta, for supporting open-source, calling him “mad.” Earlier, in April 2024, I proposed, alongside Dr. Wang Zehua of the University of British Columbia, in An Open Letter from Symbioscholars to the Six Giants in the AI World, that decisions on whether AI development should be open- or closed-source must depend on specific spatiotemporal conditions. It’s crucial not to remain narrowly focused on the binary debates of open-source/closed-source, enabling/controlling, and other dichotomies.
How should these situations be understood and addressed?
The allegations against OpenAI for using unauthorized data for training have sparked widespread discussion on the legality and ethics of AI development.
From a Symbiosism perspective, such practices violate the symbiotic relationship between humans and society, as well as between humans and knowledge. Producers and users of knowledge resources should establish fair, reciprocal mechanisms through dialogue rather than exploiting technological advantages.
While no conclusive evidence directly links Balaji’s departure and his suicide, certain factors are worth noting:
In ethical dilemmas and organizational pressures, employees who resign often become whistleblowers or advocates for reform. The intense public scrutiny of their actions can add to their psychological burden—perhaps this is why Musk merely commented "Hmm" about Balaji on social media.
Ilya’s resignation may have indirectly amplified Balaji’s decision-making pressures. As a core figure at OpenAI, Ilya's departure may have signaled serious internal problems of decision-making or values, suggesting to Balaji that his ethical concerns were not an isolated worry; the loss of such a leading figure can also destabilize an organization and deepen a young researcher's sense of isolation and moral doubt.
However, Ilya’s decision was primarily driven by strategic differences over OpenAI’s future and not directly linked to concerns over data use or ethics.
The debate involves balancing technological progress, social responsibility, and global safety. Open-sourcing promotes transparency and fairness and lowers the risk of technological monopoly, allowing smaller firms and academic institutions to compete on equal terms; yet, as Hinton warned, it may also place powerful tools in the wrong hands.
Symbioscholars propose "MindsNetworking" as an innovative response, integrating real-time evaluations of behavior to incentivize positive actions and discourage negative ones. Rooted in ancient wisdom—such as Laozi’s Heavenly Net concept—MindsNetworking aligns ethics with technological development to create a symbiotic ecosystem where humans and AI coexist harmoniously.
The "Heavenly Net" refers to the natural order, or the Way of Heaven, encompassing the Earth’s ecological sphere. This "ecological Heavenly Net" represents the interaction between life evolution and environmental transformation, forming a self-regulating "Superorganism" that sustains life, including humanity, in an optimal balance. British scientist James Lovelock, building upon Lynn Margulis’s concept of symbiosis between eukaryotic and prokaryotic cells (1970), introduced the "Gaia Hypothesis" in 1972, naming it after the Greek goddess Gaia. This hypothesis portrays life forms—plants, animals, microorganisms—and physical elements like air, oceans, and rocks as intricately interwoven in a symbiotic process. These interactions, akin to the dynamic flux of metabolic compensations, create an expansive and finely tuned Heavenly Net.
In this vast ecological net, what governs the system’s vastness and precision? Only the human mind (Minds)—bestowed by God—possesses the capacity to harness such a network. Victor Hugo eloquently captured this truth: "The ocean is vast, the sky even vaster, but the human mind is the most expansive of all." The idea that "One Mind of Heaven, One Mind of Hell" suggests that the state of humanity, and its creations (including AI), stems from the intentions and nature of human thought. Based on this philosophy, the concept of MindsNetworking emerges—a system where both humans and AI can exist in a shared framework, with real-time evaluations of moral and ethical behavior. This approach incentivizes positive actions and discourages harmful ones. Compared to traditional governance or monitoring systems, MindsNetworking is both more cost-effective and efficient while maintaining the balance of "openness and order." When Musk’s high-speed internet project, Starlink, evolves into MindsNetworking, it could become the technological backbone of the Earth’s ecological Heavenly Net.
Symbioscholars envision that, within the MindsNetworking framework, humans and AI can achieve a way of life rooted in the following values: preserving common ground while respecting differences, interacting as co-subjects, sustaining endless vitality, and achieving symbiosis along the middle way.
The transition from the Heavenly Net to MindsNetworking encapsulates the philosophy of using the power of Minds to lead technological and ethical integration. Below are the four pillars.
Philosophy: From Natural Order to Minds’ Influence
(1) The Heavenly Net: Natural Order and Symbiosis
Open yet precise, vast yet ordered, the Heavenly Net embodies the symbiotic interaction of life's evolution and environmental change; Lovelock's Gaia Hypothesis reveals the self-organizing, self-regulating balance of this ecological network.
(2) The Minds’ Role in Defining State
As Hugo observed, the human mind is vaster than the ocean and the sky; "One Mind of heaven, one Mind of hell" means that thought and values fundamentally determine behavior and its outcomes.
Mechanisms: Real-Time Evaluation and Feedback
MindsNetworking relies on dynamic mechanisms that encourage responsible behavior and address ethical challenges:
(1) Real-Time Multidimensional Analysis
Human and AI behavior is evaluated dynamically along six dimensions (true/false, good/evil, beautiful/ugly, wise/foolish, right/wrong, divine/demonic), with holographic, traceable records of every action.
(2) Incentives and Deterrents
Positive, value-creating behavior is reinforced to promote human-AI synergy, while negative or risky behavior is dynamically constrained to keep the system ethically and legally compliant.
(3) Self-Regulation
Inspired by the Heavenly Net’s openness and precision, the system enables autonomous optimization through real-time adjustments (see the sketch after the technology list below).
Technology: Infrastructure of MindsNetworking
The implementation of MindsNetworking relies on a fusion of advanced technologies:
(1) Quantum Collaboration Mechanisms
Quantum superposition and quantum coherence are envisioned as enablers of efficient interaction of Minds and global collaboration.
(2) Blockchain and AI Integration
Blockchain provides decentralized, tamper-evident records for behavior evaluation, ensuring data security and transparency; an AI ethics module gives AI the capacity for real-time ethical analysis and dynamic adjustment.
(3) Starlink Integration
Upgrading Musk’s Starlink network would supply the global, high-speed connectivity that MindsNetworking's data flows and real-time evaluation require.
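As a rough illustration of the self-regulation mechanism noted above, here is a minimal Python sketch of a feedback loop that tunes its own intervention threshold from recent outcomes. The window size, step size, and target intervention rate are illustrative assumptions, not parameters of any actual MindsNetworking design:

```python
# A minimal sketch of self-regulation: the system adjusts its own
# incentive/deterrent threshold so that it neither over- nor under-intervenes.
from collections import deque

class SelfRegulator:
    def __init__(self, threshold: float = 0.2, target_rate: float = 0.5,
                 window: int = 100, step: float = 0.01):
        self.threshold = threshold          # current intervention threshold (assumed)
        self.target_rate = target_rate      # desired share of interventions (assumed)
        self.recent = deque(maxlen=window)  # sliding window of recent decisions
        self.step = step                    # adjustment step size (assumed)

    def decide(self, value: float) -> str:
        """Intervene (incentivize/deter) or merely observe, then self-adjust."""
        intervened = abs(value) >= self.threshold
        self.recent.append(intervened)
        rate = sum(self.recent) / len(self.recent)
        # Negative feedback: intervening too often raises the threshold
        # (fewer interventions); intervening too rarely lowers it.
        self.threshold += self.step * (rate - self.target_rate)
        if not intervened:
            return "observe"
        return "incentivize" if value > 0 else "deter"

# Usage: feed a stream of evaluation scores in [-1, 1]
reg = SelfRegulator()
for v in (0.9, -0.5, 0.1, 0.05, 0.7):
    print(reg.decide(v), round(reg.threshold, 3))
```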
Ethics: The Future Vision of Intersubjective Symbiosis
MindsNetworking offers a groundbreaking approach to aligning technology with ethics:
(1) Diversity in Unity: harmony generates life; respecting differences enables deep collaboration and mutual empowerment among interacting subjects.
(2) Endless Evolution: real-time evaluation and feedback continuously adjust human and AI behavior, sustaining the ecosystem's evolution.
(3) Dual Empowerment: the power of Minds lets technology empower ethics and ethics guide technology, achieving symbiosis between humans and AI, and between humanity and nature.
MindsNetworking, as an innovative application of symbiosis philosophy, embodies the integration of ethics and technology. It transcends traditional governance models by incorporating real-time evaluation and feedback mechanisms, creating a dynamic ecosystem where humans and AI coexist symbiotically. Looking forward, MindsNetworking is set to usher humanity and AI into a new era of “Respect Differences, Embrace Commonalities; Interactive Subjects; Endless Vitality,” achieving mutual empowerment of ethics and technology for the collective good of humanity and the Earth’s ecology.