Don't Believe AI Hype (Bilingual 雙語) 不要相信人工智能炒作
Submitted by: 無套褲漢, 2025-03-25 15:24:43, [天下論壇]
|
Don't Believe AI Hype, This is Where it's Actually Headed | Oxford's Michael Wooldridge | AI History (Bilingual 雙語) 不要相信人工智能炒作,這才是它真正的發展方向 | 牛津大學的 Michael Wooldridge | 人工智能歷史
https://www.youtube.com/watch?v=Zf-T3XdD9Z8
Johnathan B
Professor Wooldridge's book on the History of AI: https://amzn.to/41Fqlt6 (affiliate)
Transcript for Interview with Michael Wooldridge. Johnathan Bi, Mar 11, 2025

0. Introduction

Oxford's Michael Wooldridge is a veteran AI researcher and a pioneer of agent-based AI. In this interview, Wooldridge is going to take us through the entire 100-year history of AI development, from Turing all the way to contemporary LLMs. Now you might wonder, why should we bother studying the history of a technology? Technology should be about state-of-the-art techniques. It should be about the latest developments. What's the point of revisiting the old and outdated? It turns out there are at least two important reasons to study this history. The first is that it will help you better anticipate its future. Wooldridge is extremely skeptical that this iteration of AI will lead to the singularity, partially because he has lived through so many similar cycles of AI hype. Whenever there's a breakthrough, you always get apocalyptic predictions pushed by religious zealots, which set up unreasonable expectations that eventually set the field back. Studying this history of boom and bust will help you see through the hype and grasp where the technology is really going.

The second and even more valuable reason to study this history is that it contains overlooked techniques that could inspire innovations today. Even for me, someone who started coding at 15, studied CS in college, and then went on to build a tech company, Wooldridge uncovered entire paradigms of AI that I hadn't even heard of. Wooldridge's claim is that these paths not taken in AI, the alleged failures, weren't wrong; they were just too early. And so this 100-year history is a treasure trove of ideas that we should look back to today for inspiration.

1. The Singularity Is Bullshit

Johnathan Bi: This is a provocative chapter title in your book, The Road to Conscious Machines: the singularity is bullshit. Why is that?

Michael Wooldridge: There is this narrative out there, and it's a very popular narrative and it's very compelling, which is that at some point machines are going to become as intelligent as human beings, and then they can apply their intelligence to making themselves even smarter. The story is that it all spirals out of our control. And of course this is the plot of quite a lot of science fiction movies, notably Terminator. I love those movies just as much as anybody does. But it's deeply implausible, and I became frustrated with that narrative for all sorts of reasons, one of which is that whenever it comes up in serious debate about where AI is going and what the risks are (and there are real risks associated with AI), it tends to suck all the oxygen out of the room, in the phrase that my colleague used. It tends to dominate the conversation and distract us from things that we should really be talking about.

Johnathan Bi: Right. In fact, there is a discipline that has come out of this called existential risk. It is essentially the worry about the Terminator situation, and the effort to figure out how we might better align these super-intelligent agents with human interests.
And if you look at not just the narrative, but actually the funding and what the smartest people are devoting their time to thinking about, in not only companies but policy groups, existential risk (x-risk) takes the dominant share of the entire market, so to speak. Why do you think this narrative has gained such a big following?

Michael Wooldridge: I think it's the low-probability but very, very high-risk argument. I think most people accept that this is not tremendously plausible, but if it did happen, it would be the worst thing ever, and so very, very, very high risk. And when you multiply that probability by the risk, then the argument is that it's something that you should start to think about. But when the success of large language models became apparent and ChatGPT was released and everybody got very excited about this last year, the debate around it reached slightly hysterical levels and became slightly unhinged at some point. My sense is that the debate has calmed down a little bit and is more focused on the actualities of where we are and what the risks are.

Johnathan Bi: Right. I think that's quite a charitable reading of the psychology: it's a rational calculus, a small probability but a very large cost. I study religious history, and when I talk to people in the x-risk world, the psychology reminds me of the Christian apocalyptics. Throughout Christian history there are these people who say, now is the time. This happened most recently, probably, when we were going through the millennium in 1999. And it's this psychological drive that wants to grab at something total and eschatological, a way to orient the entire world. I guess what I'm trying to highlight is that maybe you can see some of the same psychology in climate risk as well. It's not to say that these things aren't true, right? It's not to say that the world isn't ending as Christianity claims, that the climate isn't changing, or that there is no x-risk. It's that the reason people seem attracted to this narrative is almost a religious phenomenon.

Michael Wooldridge: I think that's right. And I think it appeals to something almost primal in human nature. At its most fundamental level it's the idea that you create something, you have a child, and they turn on you. That's the ultimate nightmare for parents. You give birth, you nurture something, you create something, and then it turns on you.

Johnathan Bi: Zeus and Cronus, right? It's an archetype.

Michael Wooldridge: Exactly. That narrative, that story, is very, very resonant. For example, go back to the original science fiction text, Frankenstein. That literally is the plot of Frankenstein. You use science to create life, to give life to something, to create something, and then it turns on you and you've lost control of that thing. So it's a very, very resonant idea, I think. It's very easy for people to latch on to.

Johnathan Bi: Right. It's easy for us to critique the psychology here, but what do you think is wrong with, or what do people miss about, the argument itself: that once we have machine intelligence that is super-intelligent, or at least on par with human-level intelligence, it can recursively improve upon itself? What do you think people are missing when they give too much weight to that argument?

Timestamps
00:00 0. Introduction
02:45 1. The Singularity Is Bullshit
14:28 2. Alan Turing
25:55 2.1 Alan Turing: The Turing Test
32:22 3. The Golden Age
39:29 4. The First AI Winter
41:25 5. Expert Systems
51:35 6. Behavioral AI
57:08 7. Agent-Based AI & Multi-Agent Systems
1:05:45 8. Machine Learning
1:08:18 9. LLMs

Gödel's theorem debunks the most important AI myth. AI will not be conscious | Roger Penrose (Nobel)
https://www.youtube.com/watch?v=biUfMZ2dts8
This Is World, Feb 22, 2025

What differentiates us from the machines? | 5 perspectives | Penrose, Noble, Millar, Aaronson, Bach
https://www.youtube.com/watch?v=lvDIZM0hYXM
The Institute of Art and Ideas, Jan 24, 2025
Five leading thinkers, from different disciplines, share their perspectives on a question that becomes more pressing with each passing day: What, if anything, differentiates us from the machines?
00:00 Introduction
00:53 Denis Noble: Stochasticity
05:13 Scott Aaronson on AI skeptics
07:44 Roger Penrose: The collapse of the wave function
15:31 Scott Aaronson on Roger Penrose
16:51 Scott Aaronson: Ephemerality
20:20 Joscha Bach on Roger Penrose
21:48 Roger Penrose on his critics
22:07 Isabel Millar: Embodiedness
22:57 Joscha Bach: Self-organisation
26:09 Isabel Millar on structures of meaning
27:50 Why AI won't be humanlike for very long

This speech accidentally exposed the truth about the US
https://www.youtube.com/watch?v=ywmpea6vvOE
Geopolitical Economy Report, Mar 21, 2025
US Vice President JD Vance gave a speech about globalization that inadvertently revealed the truth about the US empire, the goal behind the new cold war on China, the economics of imperialism, and how the Trump administration is serving billionaire Big Tech oligarchs in Silicon Valley at the expense of the working class. Ben Norton explains.
US Big Tech CEOs admit they want AI monopoly: • Tech CEOs admit they want AI monopoly... https://www.youtube.com/watch?v=4jniPAD2uoo
Topics
0:00 (CLIP) JD Vance excerpt
0:38 US vice president speech
1:03 Preparing for war on China
3:15 Summary of Vance's speech
3:56 (CLIP) Marco Rubio on China "threat"
4:39 Deindustrialization
5:02 (CLIP) JD Vance vows "industrial comeback"
5:41 Uniting billionaires and "populists"
6:26 Neoliberalism
7:01 JD Vance's patron Peter Thiel
8:22 Trump recruits Big Tech billionaires
9:34 For monopoly, against competition
10:21 (CLIP) Peter Thiel loves monopolies
10:35 Elon Musk and Trump
11:05 Billionaire Marc Andreessen
11:47 (CLIP) Trump admin loves Silicon Valley
12:08 Trump coalition: billionaires & workers?
13:37 (CLIP) Techno-optimists vs populists?
14:21 Big Tech manifesto
15:56 Scapegoating China
17:13 (CLIP) JD Vance scapegoats China
17:34 JD Vance calls China "biggest threat"
18:53 (CLIP) JD Vance scapegoats China
19:12 Neoliberal globalization
20:07 (CLIP) JD Vance on globalization
21:18 Neoliberal globalization
22:07 Imperialism & dependency theory
24:11 China's development
24:35 US bans Chinese competitors
26:15 (CLIP) JD Vance on China's AI
26:40 US Big Tech monopolies
27:23 (CLIP) "Competition is for losers"
27:38 Trump's tariffs
28:36 Jake Sullivan's industrial policy speech
29:16 (CLIP) Jake Sullivan on Washington Consensus
31:36 Industrial policy
32:45 Tech war on China
33:31 Trump's strategy
34:01 (CLIP) JD Vance on US shipbuilding
34:53 China's Shipbuilding
35:46 State-owned enterprises
38:07 US government-owned factories
40:32 Industrial policy
43:41 (CLIP) Tax cuts on rich & deregulation
44:07 Reaganism 2.0
44:28 Historical tax rates on rich
45:48 Oligarchs avoid taxes
46:52 Trump boosts deficit & debt
47:38 Fake industrial policy
49:09 (CLIP) JD Vance is "fan" of Big Tech
49:28 Andreessen Horowitz investments
50:30 S&P 500 stock buybacks & dividends
51:03 Reaganomics
51:59 Trumponomics
53:21 Tariffs & wealth transfer
53:55 Outro

* My comment: Does a Darwinian amoeba have intelligence? Single-celled amoebae are brainless and therefore lack feeling and thinking, but they can remember, make decisions, and anticipate change; thus they did possess early intelligent behavior after diverging at least 750 million years ago. (The universe is about 13.8 billion years old.) Comparing today's AI to the ancient amoebae, we may say that the former still has to speed up its efforts.

The key point of their fundamental discrepancy lies in the fact that an animal's conscious power comes from its analog system of computation, whereas that of AI comes from a digital system. The two systems, being a unity and struggle of opposites, cannot be reconciled, although the analog system can adapt the digital system to itself, because it is the more powerful of the two systems of computation; that is, humans can manipulate the digital to their advantage. The digitized AI, being at a disadvantage, will fail to acquire the kind of consciousness that originated in the analog system, which has evolved over billions of years. We should avoid the mistake of failing to see the wood for the trees, that is, looking only at the details and ignoring the whole. In a nutshell, AI can be called a smart encyclopedia based on machine evolution (s.e.b.o.m.e).

Another question that needs to be answered is whether AI-inspired humanoid and/or industrial robotics will replace human labor. The correct answer hinges on whether or not one consigns Marxism to "the ash heap of history", as Anthony Dolan suggested and did. Marxism taught us that all capitalist profit comes from the surplus value created by the unpaid surplus labor of the workers, not from machines. Thus, robotics does not produce profit for the capitalists. Marx called machines, shop buildings, and facilities constant capital, and wages variable capital. Only living labor can create value for capital, whereas the dead labor contained in machines, for example, cannot. Machines contain the dead surplus-labor value created for the machine-making capitalists, for which the buyers have already paid, hence machines by themselves no longer contribute any new labor value.
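To make the value accounting just described concrete, here is a minimal illustrative sketch in Python. It is my own addition: the function name, the figures, and the single-period simplification are assumptions for illustration, not anything taken from the cited sources.

# Illustrative sketch of the Marxian value accounting described above.
# All figures are hypothetical; value units are arbitrary.

def value_accounts(constant_capital, variable_capital, rate_of_surplus_value):
    """Return (total value, surplus value, rate of profit) for one production period.

    constant_capital      -- value transferred from machines, buildings, materials (c)
    variable_capital      -- wages paid for living labor (v)
    rate_of_surplus_value -- surplus labor relative to paid labor (s/v)
    """
    surplus_value = variable_capital * rate_of_surplus_value           # s = v * (s/v)
    total_value = constant_capital + variable_capital + surplus_value  # w = c + v + s
    advanced = constant_capital + variable_capital
    rate_of_profit = surplus_value / advanced if advanced else 0.0     # p' = s / (c + v)
    return total_value, surplus_value, rate_of_profit

# A conventional factory: machines transfer value (c); workers add new value (v + s).
print(value_accounts(constant_capital=100, variable_capital=50, rate_of_surplus_value=1.0))
# -> (200, 50.0, 0.333...)

# A fully automated factory: no living labor, so on this account no new surplus value
# is created, however productive the robots are, and profit on this model falls to zero.
print(value_accounts(constant_capital=150, variable_capital=0, rate_of_surplus_value=1.0))
# -> (150, 0.0, 0.0)

On this toy model, the claim in the comment reads as follows: automation shifts outlays from v to c, and since only v yields s, the surplus available as profit shrinks toward zero.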
Employing machines such as robots for the purpose of totally replacing labor will not be a profitable business strategy, unless the machines are manufactured in-house so as to allow the manufacturer to capture the inherent living surplus value internally. This is why big capital invests huge amounts of funds in all automation-related constant capital. It is true that machines can do many things that labor cannot, and they should be considered a perfect solution to the burdensome problems of either a lack of labor or the limitations of labor power. But this will not be universally realized under the world's last system of private ownership of the means of production, the capitalist system. The exception is that industrial robots tend to be deployed more often than humanoid robots, because the former can relieve the burdens of heavy and repetitive human labor and hence save investment in variable capital, or wages. Nonetheless, robotics does not create private profit for the capitalists. Robotics and AI, in general, are most suitable to a system whose means of production belong to the state under the dictatorship of the working class, rather than to individuals. In other words, the internal contradiction of capitalism, private ownership of the means of production versus the socialization of production, has restrained more advanced progress from being realized.

It is reported that OpenAI's advanced team will charge customers $220,000 per month for its top-of-the-line product, in view of the heavy legal and financial costs. Monopolization of sebome will eventually take over the manufacturing business and lead to over-production together with mass unemployment, causing commercial, financial, and politico-economic crises.

It is reasonable to redefine sebome as a more realistic AI than usual. Instead of being a machine competing with human brain power, it should descend to a level lower than AI and be content with simulating a brainless organism such as the amoeba. Hopefully, as the machine evolves, the smart encyclopedia will become smarter, so as to be able to simulate invertebrate organisms with primitive brains, such as flatworms and jellyfish. [Mark Wain 2025-03-25]

* {The speechwriter Anthony Dolan gave Ronald Reagan the phrase “evil empire” to describe the Soviet Union and in another address consigned Marxism to “the ash heap of history.” Dolan died at 76.}

* 漢語譯文

Johnathan B
Wooldridge 教授關於人工智能歷史的書:https://amzn.to/41Fqlt6(affiliate 連結)
Michael Wooldridge 訪談文字記錄。Johnathan Bi,2025 年 3 月 11 日

0. 介紹

牛津大學的 Michael Wooldridge 是一位資深的人工智能研究員,也是基於代理的人工智能的先驅。在這次採訪中,Wooldridge 將帶我們回顧人工智能發展的整個 100 年歷史,從圖靈一直到當代大型語言模型。現在你可能會想,我們為什麼要費心研究一項技術的歷史?技術應該是關於最先進的技術。它應該是關於最新的發展。重溫舊的和過時的東西有什麼意義?事實證明,研究這段歷史至少有兩個重要的原因。首先,它會幫助你更好地預測它的未來。Wooldridge 極度懷疑 AI 的這種迭代是否會導致奇點,部分原因是他經歷過許多類似的 AI 炒作周期。每當有突破時,你總會聽到宗教狂熱分子所宣揚的末日預言,這會產生不合理的預期,最終導致該領域倒退。研究這段興衰史將幫助你看透炒作,了解技術真正的發展方向。

研究這段歷史的第二個更有價值的原因是,它包含了被忽視的技術,這些技術可能會激發當今的創新。即使對於我這樣一個 15 歲就開始編程、在大學學習計算機科學、然後去創辦科技公司的人來說,Wooldridge 也發現了我從未聽說過的整個 AI 範式。Wooldridge 聲稱,AI 中未採取的這些路徑,所謂的失敗並沒有錯,只是為時過早。所以,這 100 年的歷史是一座思想寶庫,我們今天應該回顧它來尋找靈感。

1. 奇點是胡說八道

Johnathan Bi:這是您書中《通往意識機器之路》中一個頗具挑釁性的章節標題。奇點是胡說八道。為什麼會這樣?
Michael Wooldridge:有這樣一種說法,它非常流行,而且非常引人注目,即在某個時候,機器會變得像人類一樣聰明,然後它們可以運用自己的智慧讓自己變得更聰明。故事是,這一切都超出了我們的控制範圍。當然,這也是很多科幻電影的情節,尤其是《終結者》。我和任何人一樣喜歡那些電影。但它完全不可信,我出於各種原因對這種說法感到沮喪,其中之一就是,每當嚴肅討論人工智能的發展方向和風險時,這種說法都會帶來與人工智能相關的真實風險。它往往會用我同事的話說,把房間裡的氧氣都吸走。它往往會主導談話,分散我們對真正應該談論的事情的注意力。 Johnathan Bi:是的。事實上,有一種學科由此產生,叫做生存風險。它有點像對終結者情況的擔憂,並弄清楚我們如何才能更好地將這些超級智能代理與人類利益結合起來。如果你不僅看這種說法,而且看資金,以及最聰明的人在公司、政策團體和前風險中投入的時間,可以說,生存風險占據了整個市場的主導地位。你認為為什麼這種說法會獲得如此多的追隨者? Michael Wooldridge:我認為這是一個概率低但風險極高的論點,我的意思是,我認為大多數人都接受這種說法,這種說法不太可信。但如果真的發生了,那將是最糟糕的事情。風險非常非常高。當你將這個概率乘以風險時,那麼這個論點就是你應該開始考慮這件事了。但是,當大型語言模型的成功變得顯而易見,ChatGPT 發布後,去年每個人都對此感到非常興奮,圍繞這種爭論達到了一種有點歇斯底里的程度,在某個時候變得有點失控。我的感覺是,辯論已經平靜了一些,更多地關注我們目前所處的現實以及風險是什麼。 Johnathan Bi:是的。我認為這是對心理學的一種相當寬容的解讀。這是一種理性的計算。概率很小,但代價很大。我研究宗教史。當我與前風險世界中的人們交談時,這種心理讓我想起了基督教的末世論。基督教歷史上總有人說,現在是時候了。最近發生的一次,可能是在我們進入千禧年的時候,對吧,1999 年。正是這種心理驅動力想要抓住某種完全的、末世論的東西,以某種方式來不是整個世界。所以,我想強調的是,也許你們也能看到一些心理和氣候風險。這並不是說這些事情都是真的,對吧?這並不是說基督教的世界正在終結,氣候沒有變化,或者沒有前風險。人們似乎被這種敘述所吸引的原因幾乎是一種宗教現象。 邁克爾·伍爾德里奇:我認為這是對的。我認為它訴諸於某種幾乎原始的、某種人性。我的意思是,它最根本的層次是你創造了一些東西,你有一個孩子,他們卻背叛了你。這對父母來說是終極噩夢。你生下孩子,你養育孩子,你創造了一些東西,然後它卻背叛了你。 約翰納森·畢:宙斯和克洛諾斯,對吧?這是一個原型。 邁克爾·伍爾德里奇:沒錯。或者,這個,那個敘述,那個故事非常非常有共鳴。例如,回到最初的科幻小說文本《弗蘭肯斯坦》。這確實是《弗蘭肯斯坦》的情節。你用科學創造生命,賦予某物生命,創造某物,然後它反過來攻擊你,你就失去了對那東西的控制。所以我認為這是一個非常非常有共鳴的想法。人們很容易接受它。 Johnathan Bi:是的。我們很容易批評這裡的心理學,但你認為有什麼問題,或者你認為人們忽略了這一論點本身,即一旦我們擁有超級智能或至少與人類水平的機器智能相當,它們就可以不斷自我改進?當人們過分重視這一論點時,你認為人們忽略了什麼? 時間戳 00:00 0. 簡介 02:45 1. 奇點是胡說八道 14:28 2. 阿蘭·圖靈 25:55 2.1 阿蘭·圖靈:圖靈測試 32:22 3. 黃金時代 39:29 4. 第一個人工智能寒冬 41:25 5. 專家系統 51:35 6. 行為人工智能 57:08 7. 基於代理的人工智能和多代理系統 1:05:45 8. 機器學習 1:08:18 9. 大語言模型 哥德爾定理揭穿了最重要的人工智能神話。人工智能不會有意識 | 羅傑·彭羅斯(諾貝爾獎得主) https://www.youtube.com/watch?v=biUfMZ2dts8 這就是世界 2025 年 2 月 22 日 五位來自不同學科的頂尖思想家就一個日益緊迫的問題分享了各自的觀點:我們與機器之間到底有什麼區別? 00:00 簡介 00:53 Denis Noble:隨機性 05:13 Scott Aaronson 談 AI 懷疑論者 07:44 Roger Penrose:波函數的坍縮 15:31 Scott Aaronson 談 Roger Penrose 16:51 Scott Aaronson:短暫性 20:20 Joscha Bach 談 Roger Penrose 21:48 Roger Penrose 談他的批評者 22:07 Isabel Millar:具身性 22:57 Joscha Bach:自組織 26:09 Isabel Millar 談意義結構 27:50 為什麼 AI 不會長期像人類一樣 * 美國前勞工部長羅伯特賴希談美國經濟的真相:揭秘政治、道德與財富分配的秘密,為什麼你努力工作卻無法擺脫貧困?是誰在幕後操控着規則?|作為政商集團的一部分,他這是背叛自己的階級?|準備好顛覆三觀了嗎? https://www.youtube.com/watch?v=Qd29CoDw0Os 宏觀洞察 Dec 25, 2024 #社會真相 #財富分配 你以為唐納德·特朗普是美國危機的根源? 🤯 不!他只是一個結果!本期視頻,我們將深入揭露美國經濟和社會深層腐朽的真相,打破你對“自由市場”的所有幻想! 
我們將揭示制度性缺陷和權力失衡如何加劇不平等,並打破 “勤勞致富” 的神話。 👉 你將收穫: 了解經濟、政治和道德之間的複雜聯繫 看清富人如何通過政治獻金和遊說操縱規則 理解全球化和自由貿易背後的陰暗面 辨識企業福利和政府補貼的本質 獲得獨立思考的能力,不再被“神話”蒙蔽 💡深度解析 特朗普現象的根源: 特朗普的崛起並非偶然,而是美國社會長期存在的不平等和對機構不信任的體現。 經濟學與道德政治的交織: 經濟學並非純粹的科學,而是一門涉及價值觀和政治選擇的學科。 市場規則由權力決定: 自由市場的規則並非天然,而是由政治和權力博弈決定的。 富人通過政治影響力獲利: 大公司、華爾街和富人通過政治捐款和遊說等手段操縱規則,為自身牟利。 財富分配不均: 收入和財富分配不均並非完全基於個人努力,而是與權力、制度和出身密切相關。 “勤勞致富”的幻覺: 努力工作並不一定能帶來財富,社會階層固化,普通人上升通道受阻。 自由貿易的負面影響: 自由貿易並非對所有人有利,導致美國製造業崗位流失、工人利益受損。 “社會主義”的污名化: 美國並非真正的社會主義,富人享有大量福利,普通人卻難以獲得社會保障。 就業創造的真相: 真正的就業機會來自中下階層的消費支出,而非富人的投資和企業減稅。 通貨膨脹並非簡單問題: 壟斷企業和寡頭壟斷推高價格,而並非簡單的政府支出和工資上漲。 經濟增長並非永遠有益: 無限增長給環境造成嚴重破壞,需要轉向可持續發展模式。 不平等是民主的威脅: 貧富差距加劇社會分裂,導致民粹主義和極端主義興起。 需要系統性變革: 僅僅譴責特朗普的支持者並不能解決問題,需要從根本上改變經濟和政治制度。 #社會真相 #權力運作 #財富分配 宏觀洞察 * 面對AI新科技,沒有人是局外人《AI底層真相》|天下文化 Podcast 讀本郝書 EP32 https://www.youtube.com/watch?v=hsC5na_u2aY Feb 20, 2025 天下文化 相信閱讀 Podcast 在本集的天下文化Podcast中,主持人郝旭烈介紹穆吉亞的《AI底層真相》一書。副標題「如何避免數位滲透的陰影」點出這本書的重點,不在於教導讀者如何使用AI,或展示AI帶來的正面效益,而是揭露AI背後的多重面貌,提醒我們不能只看到它光鮮亮麗的一面,更要正視其潛藏的負面影響。 書中提出AI帶來的四大隱憂:勞動剝削、深度偽造、全面監控與個資濫用。 AI的發展並非全然自動化,它依賴大量資料進行訓練,而這些資料需要人力標記。穆吉亞揭露了所謂的「隱形工人」——這些在AI產業中默默付出的人,經常遭到剝削。其中一個例子是肯亞的薩碼公司,這是一家專門提供AI訓練服務的公司,聘用了大量來自低收入國家的員工,負責數據標記和內容審查。 這些員工的工作條件極其惡劣,薪資微薄,且經常接觸暴力、虐待和色情等令人身心受創的內容。長期下來,這些工作對他們的心理健康造成巨大影響,許多人出現創傷後壓力症候群(PTSD),甚至需要依靠抗憂鬱藥物。有些人因心理創傷而無法與家人親近,影響正常生活。 穆吉亞指出,AI產業表面上看似先進和自動化,但實際上背後仍存在嚴重的剝削現象,尤其是在數據標記和內容審查等需要大量人力的環節。他認為這種現象與過去的殖民主義和傳統外包產業的剝削模式如出一轍。AI並沒有真正解決這些結構性的問題,反而加深了全球貧富差距和勞動剝削。 此外,過去我們常說「有圖有真相」,但在深度偽造技術普及的今天,「有圖不一定有真相」已成現實。因此,當我們面對影像或資訊時,一定要保持警惕和批判思維,不要輕信網路上的一切內容,尤其是在社交媒體上流傳的圖片和影片。 最後,郝哥提醒聽眾,AI技術的發展帶來便利和創新,但同時也伴隨著隱私侵害、勞動剝削和資訊偽造等問題。我們應該在享受AI帶來的好處的同時,保持對其潛在風險的認識,並思考如何在這個數位時代中保護自己和他人的權益。 * 我的評論: 達爾文變形蟲有智慧嗎? 單細胞變形蟲沒有大腦,因此缺乏感覺或思考,但它們可以記憶、做出決定並預測變化,因此,它們在至少7.5億年前分化後確實具有早期的智能行為。(宇宙已經有 135 億年的歷史了。) 如果將當今的人工智能與古代變形蟲進行比較時,我們可以說前者還有很長的路要走。 兩者根本差異的關鍵在於,動物的意識能力來自其模擬計算系統,而人工智能建立在數字系統上。它們是兩個統一的對立面,彼此鬥爭而無法調和,儘管模擬系統可以適應數字系統,因為它是兩個計算系統中更強大的。因此,處於弱勢的數字化的人工智能將無法獲得來自經過數十億年的進化過程的模擬系統的意識。 我們應避免只見樹木不見林 - 只看局部而忽視整體 - 的錯誤。 簡而言之,人工智能可以稱為基於機器進化的聰明百科全書(s.e.b.o.m.e,也就是塞博梅或 a smart encyclopedia based on machine evolution.) 另一個需要回答的問題是,人工智能啟發的類人機器人和/或工業機器人是否會取代人類勞動力。 正確的答案取決於人們是否像安東尼·多蘭所建議和所做的那樣,將馬克思主義扔進了“歷史的垃圾堆”。 馬克思主義告訴我們,所有資本主義利潤都來自工人無償剩餘勞動創造的剩餘價值,而不是機器。因此,機器人不會為資本家創造利潤。馬克思稱機器、車間建築和設施為不變資本,工資為可變資本。只有活勞動才能為資本創造價值,而機器中包含的死勞動則不能。 機器為機器製造資本家提供了死的剩餘勞動價值,買家已經為此付了錢,因此機器本身不再貢獻新的勞動價值。使用機器人等機器來完全替代勞動力將不是一種有利可圖的商業策略,除非機器是在內部製造的,以便製造商能夠在內部獲取固有的活剩餘價值。這就是為什麼大資本家投入巨額資金,是所有與自動化相關的固定資本的幾倍。 機器確實可以做很多人工做不到的事情,應該被視為解決勞動力短缺或勞動力限制等沉重問題的完美解決方案。但這在世界上最後一個生產資料私有制或資本主義制度中不會普遍實現。例外是,工業機器人比人形機器人更常被採用,因為前者可以減輕人類繁重和重複的勞動條件的負擔,從而節省可變資本或工資的投資。儘管如此,機器人技術不會為資本家創造私人利潤。一般來說,機器人技術和人工智能最適合於工人階級專政下生產資料屬於國家而不是個人的制度。換句話說,資本主義的內在矛盾——生產資料私有制和生產社會化,已經限制了更先進的進步,而沒有實現。 據悉,OpenAI高級團隊將向客戶收取每月22萬美元的C使用費,因為要承擔高昂的法律和財務成本。sebome的壟斷最終將接管製造業,導致生產過剩和失業過剩,引發商業、金融和政治經濟危機。 將sebome也就是塞博梅(基於機器進化的聰明百科全書)重新定義為比通常更現實的AI是合理的。它首先不應該是一台與人類腦力競爭的機器,而應該下降到比AI更低的水平,滿足於模擬無腦生物,如變形蟲。希望隨着機器的進化,基於機器進化的聰明百科全書會變得更聰明,能夠模擬具有原始腦力的無脊椎動物,如扁蟲和水母等。[Mark Wain 2025-03-25] |
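As a footnote to the "sebome" proposal above (simulating a brainless organism that can nonetheless remember, decide, and anticipate), here is a minimal toy sketch in Python. The class name, the stimuli, and the thresholds are my own invented assumptions, purely for illustration; it is not a claim about how any real AI system or amoeba works.

# Toy "amoeba-level" agent: no brain, but a short memory, a simple decision rule,
# and a crude form of anticipation (withdrawing after repeated harm).

class ToyAmoeba:
    def __init__(self):
        self.memory = []                               # recent stimuli ("remember")

    def sense(self, stimulus):
        self.memory = (self.memory + [stimulus])[-5:]  # keep only a short memory window

    def decide(self):
        if self.memory.count("harm") >= 2:             # repeated harm: anticipate more and retreat
            return "withdraw"
        if "food" in self.memory:                      # food remembered nearby: move toward it
            return "approach"
        return "wander"

agent = ToyAmoeba()
for stimulus in ["none", "food", "harm", "harm", "none"]:
    agent.sense(stimulus)
    print(stimulus, "->", agent.decide())
# Prints: wander, approach, approach, withdraw, withdraw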
|
|