Is Psychology a Science?
Posted by: Box, 2016-03-10 16:52:27, in [茗香茶语]


Psychology is not a science, only a "quasi-science"; broadly speaking, psychology can never become a science. Material phenomena in nature are the lowest level; life phenomena are higher than material phenomena; and the highest are phenomena of consciousness and mind, together with the psychological phenomena associated with them.

Material and life phenomena can both be placed within the domain of science, first because these phenomena are relatively simple, and second because humanity has found, or roughly mastered, the basic laws for understanding them. Human psychological phenomena, however, are far more complex.

First, the lower end of psychological phenomena is tied to the activity of the human senses, and at this interface psychology comes close to science. But the upper end of psychological phenomena is tied to human mental activity, and at that interface psychology remains far from science, because human consciousness and mental activity are too complex to be described and observed in any standardized way, let alone quantitatively.

In addition, human psychological phenomena have an important feature: the mutual interference between sensory activity and mental activity. This makes the causes of psychological phenomena complex and uncertain; in particular, the cause of any individual psychological event is very hard to pin down. Psychology can produce excellent case analyses, but they remain individual cases; it cannot find universal laws that repeat with necessity. It can, however, find approximate regularities with high repetition rates, for example among psychological phenomena that arise mainly from sensory activity. That is why I call psychology a "quasi-science."

Maslow's hierarchy of needs I have criticized at least three times, from different angles, and the record is there to check. In fact, American psychology researchers have published studies in academic journals whose experimental data do not support Maslow's theory.

At first glance Maslow's hierarchy seems self-evidently true, but his reasoning is quite distant from how human psychology actually operates in the real world. First, his "satisfy, then ascend" model of hierarchical motivation is rather bookish and does not stand up to scrutiny. Second, the mutual interference between the human senses and the mind makes psychological activity very complex, yet Maslow's hierarchy does precisely the opposite: it partitions the psychological activities (needs) corresponding to the senses from those corresponding to the mind, layer by layer, separating them completely.

Rather than saying that higher-level needs arise only after lower-level needs are satisfied, it would be better to say that humans continually make choices within an inherent structure of multiple simultaneous needs. The former is a progressive relation, the latter a parallel one; only the combination of the two models roughly covers the psychology of human needs.

For example, in a war of foreign invasion, resistance, flight, and surrender are different modes of survival that coexist simultaneously among the people of one country: some are willing to live under the conqueror, some are not; some fear death, some do not... These simultaneous, opposing psychological phenomena simply cannot be explained by Maslow's hierarchy of needs. There is also what 桦树 said: "give me liberty or give me death" — the kind of psychological case, true or not, that breaks Maslow's scale entirely.

In fact, the psychology community itself is debating psychology's scientific status, and the article below is quite interesting. By our analysis, psychology's problem begins with the difficulty of clearly delimiting its object of study: from Pavlov's dog to Tolstoy's brain, does it all fall under psychology?



Critique of landmark study: Psychology might not face replication crisis after all

A study published last year suggested psychological research was facing a replicability crisis, but a new paper says that work was erroneous.
By Eva Botkin-Kowacki, Staff Writer | March 3, 2016



Shock waves reverberated through the field of psychology research last year at the suggestion that the field faced a "replicability crisis." But the research that triggered that quake is flawed, a team of psychologists asserted in a comment published Thursday in the journal Science.
The ability to repeat an experiment with the same results is a pillar of productive science. When the study that rocked the field was published in Science in late August, Nature News's Monya Baker wrote, "Don’t trust everything you read in the psychology literature. In fact, two thirds of it should probably be distrusted."
In what's called the Reproducibility Project, a large, international team of scientists had repeated 100 published experiments to see if they could get the same results. Only about 40 percent of the replicated experiments yielded the same results.

But now a different team of researchers is saying that there's simply no evidence of a replicability crisis in that study.

The replication paper "provides not a shred of evidence for a replication crisis," Daniel Gilbert, the first author of the new article in Science commenting on the paper from August, tells The Christian Science Monitor in a phone interview.
The initial study, conducted by the Open Science Collaboration, also openly shared all the resulting data sets. So Dr. Gilbert, a psychology professor at Harvard University, and three of his colleagues pored over that information in a quest to see if it held up.
And the reviewing team, none of whom had papers tested by the original study, found a few crucial errors that could have led to such dismal results. 
Their gripes start with the way studies were selected to be replicated. As Gilbert explains, the 100 studies replicated were from just two disciplines of psychology, social and cognitive psychology, and were not randomly sampled. Instead, the team selected studies published in three prominent psychology journals and the studies had to meet a certain list of criteria, including how complex the methods were.
"Just from the start, in my opinion," Gilbert says, "They never had a chance of estimating the reproducibility of psychology because they do not have the sample of studies that represents psychology." But, he says, that error could be dismissed, as information could still arise about more focused aspects of the field.
But when it came down to replicating the studies, other errors were made. "You might naïvely think that the word replication, since it contains the word replica, means that these studies were done in exactly the same way as the original studies," Gilbert says. In fact, he points out, some of the studies were conducted using different methods or different sample populations. 
"It doesn't stop there," Gilbert says. It turns out that the researchers made a mathematical error when calculating how many of the studies fail to replicate simply based on chance. Based on their erroneous calculations, the number of studies that failed to replicate far outnumbered those expected to fail by chance. But when that calculation was corrected, says Gilbert, their results could actually be explained by chance alone. 
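The "failures expected by chance" point above is just binomial arithmetic. A minimal sketch, assuming (hypothetically) that each of the 100 replications had a 0.92 probability of re-detecting a true effect; both that power figure and the calculation are illustrative, not Gilbert's actual recalculation:

```python
from math import comb

def prob_at_most(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p): chance of at most k successful replications."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n = 100       # replications attempted
power = 0.92  # assumed probability each true effect is re-detected (illustrative)

# Failures expected by chance alone under this assumed power
expected_failures = n * (1 - power)
print(f"Expected failures by chance alone: {expected_failures:.0f} of {n}")

# Probability of seeing 40 or fewer successes if power really were 0.92
p_low = prob_at_most(40, n, power)
print(f"P(<=40 successes | power=0.92) ~ {p_low:.1e}")
```

Under these assumptions only about 8 failures would be expected by chance, and 40 successes would be astronomically unlikely; the dispute therefore turns on the effective power of each replication. If differing methods and samples lowered it substantially, far more failures become compatible with chance, which is the crux of Gilbert's argument.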
"Any one of [these mistakes] would cast grave doubt on this article," Gilbert says. "Together, in my view, they utterly eviscerate the conclusion that psychology doesn't replicate."
The journal Science isn't just leaving it at that though. Published alongside Gilbert and his team's critique of the original paper is a reply from 44 members of the replication team.
Brian Nosek, executive director of the Center for Open Science, who led the original study, says that his team agrees with Gilbert's team in some ways.
Dr. Nosek tells the Monitor in a phone interview that his team wasn't trying to conclude why the original studies' results only matched the replicated results about 40 percent of the time. It could be that the original studies were wrong or the replications were wrong, either by chance or by inconsistent methods, he says.
Or perhaps there were conditions necessary to get the original result that the scientists didn't consider but could in fact further inform the results, he says.
"We don't have sufficient evidence to draw a conclusion of what combination of these contributed to the results that we observed," he says. 
It could simply come down to how science works. 
"No one study is definitive for anything, neither the replication nor the original," Nosek says. "Anyone that draws a definitive conclusion based on a single study is overstepping what science can provide," and that goes for the Reproducibility Project too. Each study was repeated only once, he says.
"What we offered is that initial piece of evidence that hopefully would, and has, gotten people's theoretical juices flowing, to spur that debate," Nosek says. And spur it has. 
Gilbert agrees that one published scientific paper should not be taken as definitive. "Journals aren't gospel. Journals aren't the place where truth goes to be enshrined forever," he says. "Journals are organs of communication. They're the way that scientists tell each other, hey guys, I did an experiment. Look what I found."
When reproduction follows, that's "how science accumulates knowledge," Nosek says. "A scientific claim becomes credible by the ability to independently reproduce it."


http://www.csmonitor.com/Science/2016/0303/Critique-of-landmark-study-Psychology-may-not-face-replicability-crisis-after-all

