Box: Is Psychology a Science?
Posted by: Box, March 10, 2016, 16:52:27, on [茗香茶語]


Psychology is not a science but only a "quasi-science," and broadly speaking it can never become one. Material phenomena in nature are the most elementary; life phenomena rank above material phenomena; and highest of all are the phenomena of consciousness and spirit, together with the psychological phenomena bound up with them.

Material and life phenomena can both be placed within the scope of science, first because these phenomena are relatively simple, and second because humanity has found, or roughly mastered, the basic laws for understanding them. Human psychological phenomena, however, are far more complex.

First, the lower end of psychological phenomena is tied to the activity of the human senses, and at that interface psychology comes close to science. But the higher end of psychological phenomena is tied to human mental and spiritual activity, and there psychology stands far from science, because human consciousness and mental activity are too complex to be described and observed in any standardized way, let alone quantitatively.

Moreover, human psychological phenomena have an important feature: the mutual interference between sensory activity and mental activity. This makes the causes of psychological phenomena complex and indeterminate; in particular, the cause of any single psychological event is very hard to pin down. Psychology can produce excellent case analyses, but they remain individual cases; it cannot extract general laws that repeat of necessity. It can, however, find approximate regularities that recur at high rates, for instance among psychological phenomena driven mainly by sensory activity. That is why I call psychology a "quasi-science."

As for Maslow's hierarchy of needs, I have criticized it from different angles at least three times, and the record is there to check. In fact, American psychology researchers have published reports in academic journals whose experimental data do not support Maslow's theory.

At first glance Maslow's hierarchy seems self-evidently true, but its logical reasoning stands at a considerable distance from how people actually behave in the real world. First, his "satisfy, then ascend" model of motivation across levels is a bit bookish and does not hold up under scrutiny. Second, the mutual interference of the senses and the mind makes human psychological activity very complex, whereas Maslow's hierarchy does precisely the opposite: it partitions the psychological activity (needs) corresponding to the senses from the psychological activity corresponding to the mind, sealing them off level by level.

Rather than saying that higher needs arise only after lower needs are satisfied, it would be better to say that people constantly make choices within a fixed structure of multiple coexisting needs. The former is a progressive relation, the latter a parallel one; only the combination of the two models roughly covers the psychology of human needs.

For example, in a war of foreign invasion, resistance, flight, and surrender are simultaneous, differing modes of survival among the people of one country: some are willing to live as a conquered people, some are not; some fear death, some do not... These coexisting, opposing psychological phenomena simply cannot be explained by Maslow's hierarchy of needs. And there is the "give me liberty or give me death" that 樺樹 mentioned, another class of cases that would break Maslow's scale, whether genuine or not.

In fact, the psychological community itself debates the scientific status of psychology, and the article below is also quite interesting. On our analysis, psychology's problem begins with the difficulty of clearly delimiting its object of study: from Pavlov's dogs to Tolstoy's brain, does all of it fall under psychology?



Critique of landmark study: Psychology might not face replication crisis after all

A study published last year suggested psychological research was facing a replicability crisis, but a new paper says that work was erroneous.
By Eva Botkin-Kowacki, Staff writer MARCH 3, 2016



Shock waves reverberated through the field of psychology research last year at the suggestion that the field faced a "replicability crisis." But the research that triggered that quake is flawed, a team of psychologists asserted in a comment published Thursday in the journal Science.
The ability to repeat an experiment with the same results is a pillar of productive science. When the study that rocked the field was published in Science in late August, Nature News's Monya Baker wrote, "Don’t trust everything you read in the psychology literature. In fact, two thirds of it should probably be distrusted."
In what's called the Reproducibility Project, a large, international team of scientists had repeated 100 published experiments to see if they could get the same results. Only about 40 percent of the replicated experiments yielded the same results.

But now a different team of researchers is saying that there's simply no evidence of a replicability crisis in that study.

The replication paper "provides not a shred of evidence for a replication crisis," Daniel Gilbert, the first author of the new article in Science commenting on the paper from August, tells The Christian Science Monitor in a phone interview.
The initial study, conducted by the Open Science Collaboration, also openly shared all the resulting data sets. So Dr. Gilbert, a psychology professor at Harvard University, and three of his colleagues pored over that information in a quest to see if it held up.
And the reviewing team, none of whom had papers tested by the original study, found a few crucial errors that could have led to such dismal results. 
Their gripes start with the way studies were selected to be replicated. As Gilbert explains, the 100 studies replicated were from just two disciplines of psychology, social and cognitive psychology, and were not randomly sampled. Instead, the team selected studies published in three prominent psychology journals and the studies had to meet a certain list of criteria, including how complex the methods were.
"Just from the start, in my opinion," Gilbert says, "they never had a chance of estimating the reproducibility of psychology, because they do not have a sample of studies that represents psychology." But, he says, that error could be set aside, since information could still emerge about more focused aspects of the field.
But when it came down to replicating the studies, other errors were made. "You might naïvely think that the word replication, since it contains the word replica, means that these studies were done in exactly the same way as the original studies," Gilbert says. In fact, he points out, some of the studies were conducted using different methods or different sample populations. 
"It doesn't stop there," Gilbert says. It turns out that the researchers made a mathematical error when calculating how many of the studies fail to replicate simply based on chance. Based on their erroneous calculations, the number of studies that failed to replicate far outnumbered those expected to fail by chance. But when that calculation was corrected, says Gilbert, their results could actually be explained by chance alone. 
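Gilbert's chance-based point can be illustrated with a toy binomial model. The power figure below is an illustrative assumption, not a number from either paper: even if every original effect were real, each replication succeeds only with that study's statistical power, so some failures are expected by chance alone.

```python
import math

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k successes in n independent trials,
    each succeeding with probability p."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def expected_chance_failures(n: int, power: float) -> float:
    """Expected replication failures if every original effect is real
    and each replication succeeds with probability `power`."""
    return n * (1 - power)

# Hypothetical numbers: 100 replications, each run at 80% power.
n, power = 100, 0.80
print(expected_chance_failures(n, power))  # failures expected from chance alone

# Probability of seeing at least 30 failures purely by chance
# under the same assumption:
p_30_plus = sum(binom_pmf(k, n, 1 - power) for k in range(30, n + 1))
print(p_30_plus)
```

The actual dispute turned on subtler issues (confidence-interval overlap and how "failure" was defined), but the sketch shows the basic logic: a nonzero failure count is not by itself evidence that the original findings were wrong.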
"Any one of [these mistakes] would cast grave doubt on this article," Gilbert says. "Together, in my view, they utterly eviscerate the conclusion that psychology doesn't replicate."
The journal Science isn't just leaving it at that though. Published alongside Gilbert and his team's critique of the original paper is a reply from 44 members of the replication team.
Brian Nosek, executive director of the Center for Open Science, who led the original study, says that his team agrees with Gilbert's team in some ways.
Dr. Nosek tells the Monitor in a phone interview that his team wasn't trying to conclude why the original studies' results only matched the replicated results about 40 percent of the time. It could be that the original studies were wrong or the replications were wrong, either by chance or by inconsistent methods, he says.
Or perhaps there were conditions necessary to get the original result that the scientists didn't consider but could in fact further inform the results, he says.
"We don't have sufficient evidence to draw a conclusion of what combination of these contributed to the results that we observed," he says. 
It could simply come down to how science works. 
"No one study is definitive for anything, neither the replication nor the original," Nosek says. "Anyone that draws a definitive conclusion based on a single study is overstepping what science can provide," and that goes for the Reproducibility Project too. Each study was repeated only once, he says.
"What we offered is that initial piece of evidence that hopefully would, and has, gotten people's theoretical juices flowing, to spur that debate," Nosek says. And spur it has. 
Gilbert agrees that one published scientific paper should not be taken as definitive. "Journals aren't gospel. Journals aren't the place where truth goes to be enshrined forever," he says. "Journals are organs of communication. They're the way that scientists tell each other, hey guys, I did an experiment. Look what I found."
When reproduction follows, that's "how science accumulates knowledge," Nosek says. "A scientific claim becomes credible by the ability to independently reproduce it."


http://www.csmonitor.com/Science/2016/0303/Critique-of-landmark-study-Psychology-may-not-face-replicability-crisis-after-all

