

摘 要 信任是人機(jī)成功合作的基礎(chǔ)。但個(gè)體在人機(jī)交互中并不總是持有恰當(dāng)?shù)男湃嗡剑?也可能會(huì)出現(xiàn)信任偏差:過度信任和信任不足。信任偏差會(huì)妨礙人機(jī)合作, 因此需要對(duì)信任進(jìn)行校準(zhǔn)。信任校準(zhǔn)常常通過信任抑制與信任提升兩條途徑來實(shí)現(xiàn)。信任抑制聚焦于如何降低個(gè)體對(duì)機(jī)器人過高的信任水平, 信任提升則側(cè)重于如何提高個(gè)體對(duì)機(jī)器人較低的信任水平。未來研究可進(jìn)一步優(yōu)化校準(zhǔn)效果評(píng)估的測(cè)量方法、揭示信任校準(zhǔn)過程中以及信任校準(zhǔn)后個(gè)體的認(rèn)知變化機(jī)制、探索信任校準(zhǔn)的邊界條件以及個(gè)性化和精細(xì)化的信任校準(zhǔn)策略, 以期助推人機(jī)協(xié)作。
關(guān)鍵詞 信任校準(zhǔn), 信任偏差, 信任抑制, 信任提升, 人機(jī)交互
分類號(hào) B849
1 引言
信任廣泛存在于任何關(guān)系的建立與發(fā)展之中, 如親密關(guān)系(Rempel et al., 1985)、消費(fèi)關(guān)系(Kwon et al., 2021)、組織關(guān)系(Meng & Berger, 2019)、醫(yī)患關(guān)系(Petrocchi et al., 2019)等。它不僅是人際交往的重要因素, 也是社會(huì)發(fā)展的潤(rùn)滑劑(樂國(guó)安, 韓振華, 2009)。隨著機(jī)器人逐漸走進(jìn)人們的生活, 研究者們發(fā)現(xiàn)信任也存在于人機(jī)交互之中(Hoff & Bashir, 2015; Khavas, 2021)。本文結(jié)合前人的研究(高在峰 等, 2021; Lee & See, 2004; Mayer et al., 1995), 將人機(jī)信任定義為:個(gè)體在情境不確定或具有脆弱性時(shí), 對(duì)機(jī)器人能幫助己方實(shí)現(xiàn)目標(biāo)或不會(huì)利用己方弱點(diǎn)所持有的信心和心理預(yù)期。信任對(duì)于人機(jī)交互至關(guān)重要, 它不僅是人類使用與接受算法的前提(Sanders et al., 2019), 也是人機(jī)合作的基礎(chǔ)(Esterwood & Robert, 2021)。
本文主要關(guān)注人與智能機(jī)器人、算法、人工智能之間的交互, 且以人與智能機(jī)器人之間的交互為主。以智能機(jī)器人為例, 人機(jī)交互中, 個(gè)體對(duì)智能機(jī)器人的信任水平可能過高, 也可能過低, 前者為過度信任(Over-trust), 后者為信任不足(Under-trust)。過度信任會(huì)導(dǎo)致人們對(duì)智能機(jī)器人不恰當(dāng)?shù)囊蕾嚭驼`用(Misuse), 信任不足則會(huì)導(dǎo)致棄用(Disuse)。信任不足和過度信任都會(huì)破壞人機(jī)交互系統(tǒng)的價(jià)值(Hancock et al., 2011), 因此個(gè)體需要在感知可靠性和實(shí)際可靠性之間進(jìn)行準(zhǔn)確校準(zhǔn)(Calibration)以保持恰當(dāng)?shù)男湃嗡剑∕adhavan amp; Wiegmann, 2007)。當(dāng)個(gè)體擁有校準(zhǔn)良好的信任時(shí), 他/她就知道何時(shí)應(yīng)該信任機(jī)器人, 何時(shí)不應(yīng)該信任機(jī)器人(Ali et al., 2022)。人機(jī)信任往往通過兩條途徑進(jìn)行校準(zhǔn):信任抑制(Trust dampening)與信任提升。信任抑制聚焦于如何降低個(gè)體不切實(shí)際的較高信任水平, 信任提升側(cè)重于如何提升個(gè)體過低的信任水平。需要注意的是, 本文將提升個(gè)體過低信任水平的途徑命名為“信任提升”, 而非以往研究中經(jīng)常使用的“信任修復(fù)” (Trust repair)。我們認(rèn)為“信任修復(fù)”強(qiáng)調(diào)的是怎樣改善個(gè)體在機(jī)器人出現(xiàn)信任違背(Trust violation)后的過低信任水平, 它并沒有把如何提升個(gè)體初始信任水平過低的情況包括在內(nèi)。相比之下, “信任提升”能更好地囊括和反映該校準(zhǔn)途徑的內(nèi)容。國(guó)外研究者們針對(duì)人機(jī)信任校準(zhǔn)開展了大量研究(Alarcon et al., 2020; de Visser et al., 2020; Ososky et al., 2013), 考察了人機(jī)信任偏差的成因, 并提出了相應(yīng)的信任校準(zhǔn)策略, 但這些研究還比較分散, 缺乏對(duì)該領(lǐng)域?qū)嵶C研究的系統(tǒng)整合與梳理; 另外, 目前針對(duì)人機(jī)信任校準(zhǔn)策略的有效性尚存在爭(zhēng)議, 且以往研究大多只關(guān)注信任偏差的其中一方面(比如信任不足或過度信任), 忽略了從整體視角去整合信任偏差、信任校準(zhǔn)有關(guān)研究的必要性與重要性。基于此, 本文從人機(jī)交互中可能出現(xiàn)的信任偏差成因入手, 梳理人機(jī)交互過程中機(jī)器人、個(gè)體本身、情境是怎樣影響信任偏差的, 以及如何通過信任提升與信任抑制兩條途徑校準(zhǔn)人機(jī)信任、糾正信任偏差(見圖1); 本文也試圖厘清人機(jī)信任校準(zhǔn)策略的邊界條件, 并在此基礎(chǔ)之上提出未來研究展望。
2 人機(jī)信任偏差
在本文中, 我們將個(gè)體在人機(jī)交互中表現(xiàn)出來的過度信任和信任不足統(tǒng)稱為人機(jī)信任偏差, 即個(gè)體由于對(duì)機(jī)器人能力的錯(cuò)誤估計(jì)導(dǎo)致信任偏離校準(zhǔn)值。人機(jī)信任偏差會(huì)導(dǎo)致個(gè)體信任比人類更不可靠的算法, 或不信任比人類更可靠的算法(Dzindolet et al., 2003)。
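為直觀說明“信任偏離校準(zhǔn)值”的含義, 下面給出一個(gè)極簡(jiǎn)的Python示意:將個(gè)體的主觀信任與機(jī)器人的實(shí)際可靠性進(jìn)行比較, 差值超出容忍閾值即判定為過度信任或信任不足。其中對(duì)兩個(gè)變量的歸一化處理、容忍閾值tolerance以及函數(shù)命名均為示意性假設(shè), 并非文獻(xiàn)中的標(biāo)準(zhǔn)操作化方式。

```python
def classify_trust_bias(subjective_trust: float, actual_reliability: float,
                        tolerance: float = 0.1) -> str:
    """比較主觀信任與機(jī)器人實(shí)際可靠性, 判斷信任偏差方向(示意)。
    兩個(gè)輸入均假定已歸一化到[0, 1]; tolerance 為示意性的容忍閾值。"""
    gap = subjective_trust - actual_reliability   # 正值表示信任高于實(shí)際可靠性
    if gap > tolerance:
        return "over-trust"      # 過度信任
    if gap < -tolerance:
        return "under-trust"     # 信任不足
    return "calibrated"          # 校準(zhǔn)良好

# 示例: 主觀信任0.9, 實(shí)際可靠性0.6, 判定為過度信任
print(classify_trust_bias(0.9, 0.6))
```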
2.1 人機(jī)信任偏差的危害
過度信任往往出現(xiàn)在個(gè)體認(rèn)為機(jī)器人具備人類沒有的功能, 或個(gè)體期望機(jī)器人能幫助他們降低風(fēng)險(xiǎn)的情境下(Borenstein et al., 2018; Parasuraman amp; Riley, 1997)。過度信任會(huì)直接導(dǎo)致個(gè)體高估機(jī)器人的能力, 亦常常伴隨有決策錯(cuò)誤的風(fēng)險(xiǎn), 例如盲目地接受機(jī)器人智能體(Agent)提出的所有決策方案卻不加考慮該決策是否合理(Khavas, 2021; Khavas et al., 2020); 過度信任有時(shí)甚至?xí)?duì)個(gè)體的生命造成威脅。Borenstein等人(2018)發(fā)現(xiàn)盡管目前最先進(jìn)的機(jī)器人外骨骼只能在低速慢走等有限條件下為運(yùn)動(dòng)殘疾兒童提供一定的輔助功能, 但是仍有很多運(yùn)動(dòng)殘疾兒童的家長(zhǎng)認(rèn)為當(dāng)他們的孩子進(jìn)行某些風(fēng)險(xiǎn)運(yùn)動(dòng)(例如攀爬)時(shí), 外骨骼也可以保護(hù)他們的孩子不受傷害。這種過度信任機(jī)器(人)帶來的危險(xiǎn)同樣出現(xiàn)在交通駕駛中:那些對(duì)于自動(dòng)駕駛汽車高度信任的司機(jī)更容易在駕駛過程中出現(xiàn)打瞌睡的情況(Kundinger et al., 2019), 從而增加出現(xiàn)交通事故的可能性。
與過度信任相比, 信任不足的危害較小, 但是對(duì)于算法的信任不足往往會(huì)使個(gè)體傾向于低估算法能力(Parasuraman & Riley, 1997), 不能很好地利用算法, 也無法享受使用算法所帶來的好處(Ali et al., 2022), 最終在人機(jī)協(xié)作情境中惡化整體績(jī)效, 降低人機(jī)團(tuán)隊(duì)效率(Okamura & Yamada, 2020)。
2.2 人機(jī)信任偏差之因
2.2.1 與機(jī)器人有關(guān)的因素
可靠性。機(jī)器人本身與性能相關(guān)的因素, 在人機(jī)交互之初對(duì)人機(jī)信任的影響很大(Robinette et al., 2017b)。以可靠性(Reliability)為例, 它指代機(jī)器人性能具有前后一致性(Hancock et al., 2021)。一個(gè)可靠的機(jī)器人, 應(yīng)該具備可預(yù)測(cè)、性能穩(wěn)定等特點(diǎn)。機(jī)器人的可靠性既可能誘發(fā)過度信任, 也可能誘發(fā)信任不足(Shi et al., 2020)。具體來說, 如果人們察覺到機(jī)器人的能力是可靠的、穩(wěn)定不變的、可預(yù)測(cè)的, 就極有可能放松對(duì)機(jī)器人的實(shí)時(shí)監(jiān)測(cè), 表現(xiàn)出對(duì)機(jī)器人的過度信任; 反之則容易出現(xiàn)信任不足。正如前文所述, 機(jī)器人性能與可靠性密切相關(guān), 而人機(jī)交互之中機(jī)器人錯(cuò)誤的出現(xiàn)則會(huì)誘發(fā)信任違背, 導(dǎo)致信任方(個(gè)體)對(duì)受信方(機(jī)器人)的信任意向或信任信念降低(嚴(yán)瑜, 吳霞, 2016; Kim et al., 2009)。錯(cuò)誤誘發(fā)信任違背的原因主要有二:一是錯(cuò)誤會(huì)讓人們懷疑算法的可靠性較低, 進(jìn)而造成信任水平下降(Alarcon et al., 2020; Correia et al., 2018; Lee amp; Moray, 1992); 二是人們往往對(duì)于算法錯(cuò)誤這類信息的敏感性較高, 無法容忍算法出錯(cuò), 一旦算法出錯(cuò)就會(huì)直接棄用(Dietvorst et al., 2015)。另外, 錯(cuò)誤發(fā)生的頻率、嚴(yán)重程度、數(shù)量也會(huì)影響信任水平的變化(Rossi et al., 2017), 錯(cuò)誤發(fā)生的越頻繁、越嚴(yán)重、數(shù)量越多, 就會(huì)導(dǎo)致信任水平下降速度越快、幅度越大。除去機(jī)器人的明顯錯(cuò)誤以外, 一些意料之外的、非預(yù)期行為同樣也會(huì)導(dǎo)致信任違背。Lyons等人(2023)發(fā)現(xiàn)當(dāng)機(jī)器人行動(dòng)路線偏離參與者原本設(shè)定的路線之后, 參與者對(duì)機(jī)器人的信任感知和可信度感知均下降。
值得一提的是, 針對(duì)錯(cuò)誤是否會(huì)導(dǎo)致個(gè)體對(duì)機(jī)器人的信任下降, 有研究者提出了相反的觀點(diǎn)。例如Sarkar等人(2017)指出, 錯(cuò)誤不會(huì)影響參與者對(duì)機(jī)器人可信度的感知和后續(xù)的人機(jī)協(xié)作任務(wù)績(jī)效, 但他們同時(shí)也承認(rèn)了在該實(shí)驗(yàn)中任務(wù)性質(zhì)(任務(wù)困難且要求高)、犯錯(cuò)類型(僅涉及認(rèn)知錯(cuò)誤:給參與者錯(cuò)誤指導(dǎo), 但不會(huì)妨礙其完成任務(wù))等因素的影響。有趣的是, 機(jī)器人的錯(cuò)誤有時(shí)甚至被感知為可愛(Ragni et al., 2016)。當(dāng)機(jī)器人犯錯(cuò)之后, 人們會(huì)覺得它更具有人類相似性、更討人喜歡(Mirnig et al., 2017; Salem et al., 2013), 一個(gè)完美的機(jī)器人反倒會(huì)看起來更不自然(Biswas amp; Murray, 2015)。這也印證了在人機(jī)互動(dòng)中同樣存在“出丑效應(yīng)(Pratfall effect)”。在“剪刀石頭布”游戲中, 當(dāng)機(jī)器人出現(xiàn)口頭作弊(明明輸了卻聲稱自己贏了)或行為作弊(在看到對(duì)方出拳之后改變自己的原本答案)時(shí), 相比于無作弊行為的控制組機(jī)器人, 人們?cè)谧鞅讞l件下與機(jī)器人的社會(huì)互動(dòng)明顯增加, 也更容易被機(jī)器人的作弊行為所逗樂, 盡管他們主觀上認(rèn)為作弊是不公平的(Short et al., 2010)。
化身。人機(jī)交互中, 化身(Embodiment)對(duì)信任也有一定影響, 化身是指機(jī)器人的形態(tài)是實(shí)體或虛擬(van Maris et al., 2017), 主要?jiǎng)澐譃槲锢砘恚≒hysical embodiment)和虛擬化身(Virtual embodiment)。物理化身指機(jī)器人在三維空間中具備一個(gè)有形的、物理的身體, 能自由移動(dòng)或操縱環(huán)境(Haring et al., 2021); 而虛擬化身(例如虛擬機(jī)器人)僅呈現(xiàn)在電子屏幕上, 雖然擁有虛擬身體, 但活動(dòng)范圍受限制。相較于虛擬呈現(xiàn), 人們更喜歡與物理呈現(xiàn)的機(jī)器人進(jìn)行交互。物理化身會(huì)通過社會(huì)臨場(chǎng)感(Social presence)影響信任, 喚起個(gè)體對(duì)機(jī)器人的積極態(tài)度, 將機(jī)器人視為社會(huì)行動(dòng)者(Social actor), 進(jìn)而對(duì)其作出社會(huì)化反應(yīng)(Jung amp; Lee, 2004)。尤其當(dāng)機(jī)器人的位置非常顯眼, 會(huì)無形增大個(gè)體依賴于它們的概率(Robinette et al., 2017a)。Bainbridge等人(2011)發(fā)現(xiàn), 相對(duì)于遠(yuǎn)程呈現(xiàn)機(jī)器人, 當(dāng)機(jī)器人是以實(shí)體形態(tài)與參與者交互時(shí), 參與者對(duì)機(jī)器人不尋常命令的服從率會(huì)更高。在實(shí)體機(jī)器人形態(tài)條件下, 盡管參與者都猶豫并感到困惑, 但22名參與者中有12名還是聽從機(jī)器人的指示將書本扔進(jìn)了垃圾桶; 相比之下, 遠(yuǎn)程呈現(xiàn)機(jī)器人條件下僅有2或3名參與者服從了這個(gè)命令。需要說明的是, 在該研究中服從機(jī)器人的命令被視為參與者信任機(jī)器人的表現(xiàn)。
2.2.2 與個(gè)體有關(guān)的因素
動(dòng)機(jī)。過度信任可能出于人們社會(huì)惰性(Social loafing)的動(dòng)機(jī), 即相較于自己?jiǎn)为?dú)工作, 人機(jī)協(xié)作中個(gè)體付出的努力更少(Onnasch amp; Panayotidis, 2020; Parasuraman amp; Manzey, 2010)。當(dāng)個(gè)體與機(jī)器人一起工作時(shí), 責(zé)任可能在個(gè)體和機(jī)器人之間分散, 因此人機(jī)協(xié)作情境中更有可能出現(xiàn)“搭便車”效應(yīng)(Dzindolet et al., 2002)。Cymek等人(2023)的研究中發(fā)現(xiàn), 盡管單獨(dú)工作組和與機(jī)器人協(xié)作組的參與者都自我報(bào)告在任務(wù)中投入了大量精力, 但對(duì)比工作績(jī)效發(fā)現(xiàn)單獨(dú)工作組的參與者明顯比機(jī)器人協(xié)作組的參與者任務(wù)績(jī)效更高。Cymek等人推測(cè)在實(shí)驗(yàn)階段前四分之三的時(shí)間中, 機(jī)器人協(xié)作組發(fā)現(xiàn)機(jī)器人的可靠性很高, 因此在任務(wù)最后階段放松警惕, 未能及時(shí)察覺出機(jī)器人犯錯(cuò)。
自我信心。當(dāng)個(gè)體的自我信心超過對(duì)自動(dòng)化的信任時(shí), 個(gè)體更可能在人機(jī)交互中依靠自己; 當(dāng)個(gè)體對(duì)自身信心不足, 則可能轉(zhuǎn)向依賴于自動(dòng)化(Lee & Moray, 1994)。后一種情況極易誘發(fā)過度信任, 不僅僅因?yàn)槿藗冋J(rèn)為算法較為權(quán)威, 而人類的權(quán)力更小(Shank et al., 2021)、更弱勢(shì), 也因?yàn)槿藗冋J(rèn)為與人類決策相比, 算法決策更為可靠、更準(zhǔn)確(Mosier & Skitka, 1996)。在Dijkstra (1999)的研究中, 算法專家系統(tǒng)無論法律案件的具體情況如何, 總是認(rèn)定被告有罪, 參與者最后需要評(píng)估是否接受該系統(tǒng)的意見。研究結(jié)果發(fā)現(xiàn), 盡管參與者有更好的選擇(例如聽從人類律師辯護(hù)詞的建議), 但他們最終更樂于聽從算法專家系統(tǒng)的建議, 即使建議是錯(cuò)誤的。那些樂于聽從算法專家系統(tǒng)的參與者對(duì)算法專家系統(tǒng)的評(píng)價(jià)更積極, 權(quán)威依從得分更高。Xu等人(2018)也發(fā)現(xiàn)相對(duì)于人類治療師而言, 人們更加信任機(jī)器人治療師, 并且伴隨有過度信任的風(fēng)險(xiǎn)。
算法態(tài)度。算法態(tài)度是個(gè)體對(duì)算法的認(rèn)知、情感和行為傾向的總和。算法欣賞(Algorithm appreciation)與算法厭惡(Algorithm aversion)就是兩種典型的算法態(tài)度。算法欣賞會(huì)促使個(gè)體積極趨近算法決策, 進(jìn)而表現(xiàn)出對(duì)于算法的過度信任。Logg等人(2019)發(fā)現(xiàn), 即使無法判斷算法或人類決策的正確性, 當(dāng)參與者認(rèn)為決策是來自算法而不是人類時(shí), 即便兩者給出的決策內(nèi)容實(shí)質(zhì)是一樣的, 他們也更容易依賴算法, 且這種算法欣賞效應(yīng)具有跨主客觀任務(wù)的一致性。個(gè)體對(duì)于機(jī)器人過高的信任也隱含了對(duì)機(jī)器人的性能期望(Lyons et al., 2020; Shin et al., 2020)。算法欣賞可能與高期望有關(guān)。期望越高, 初始信任水平就會(huì)越高。一方面, 高期望來自于對(duì)機(jī)器人外表的認(rèn)知, 例如機(jī)器人的擬人化(Anthropomorphism)會(huì)增強(qiáng)個(gè)體信任(van Pinxteren et al., 2019); 另一方面, 高期望可能是缺失真實(shí)互動(dòng)體驗(yàn)的結(jié)果。例如在一項(xiàng)研究中, 當(dāng)機(jī)器人在完成任務(wù)的同時(shí)報(bào)告“Q值” (一串?dāng)?shù)字代碼)時(shí), 不管參與者是否具備AI知識(shí)經(jīng)驗(yàn), 都認(rèn)為該AI更加可靠, 并認(rèn)為越難以理解的AI越聰明(Ehsan et al., 2021)。
低信任水平也與算法厭惡息息相關(guān)。Chiarella等人(2022)發(fā)現(xiàn), 對(duì)于同一位畫家用不同色彩顏料創(chuàng)作的兩幅繪畫作品, 僅僅操縱畫作作者被標(biāo)注為人類還是AI, 就會(huì)影響人們對(duì)畫作的審美評(píng)分, 具體表現(xiàn)為人們對(duì)“AI”作品的評(píng)分更低。算法厭惡可能是由于目前大眾對(duì)機(jī)器人的實(shí)際接觸較少, 加之某些網(wǎng)絡(luò)媒體對(duì)AI威脅的恐嚇性報(bào)道和大肆宣揚(yáng)(例如AI會(huì)統(tǒng)治世界、未來將發(fā)生人機(jī)大戰(zhàn)等) (Demir et al., 2019), 無形中加劇個(gè)體對(duì)機(jī)器人的負(fù)面態(tài)度, 進(jìn)而造成個(gè)體對(duì)機(jī)器人的消極印象。算法厭惡也可能是消極信任轉(zhuǎn)移(Trust transfer)的后果(Okuoka et al., 2022), 如果個(gè)體之前對(duì)于計(jì)算機(jī)、手機(jī)等機(jī)械類產(chǎn)品有較糟糕的使用體驗(yàn)與經(jīng)歷, 這種消極態(tài)度也會(huì)遷移到與算法有關(guān)的新興產(chǎn)品上(Lee & Kolodge, 2020)。
心理模型。心理模型(Mental models)是經(jīng)過組織的知識(shí)結(jié)構(gòu), 亦是對(duì)工作環(huán)境的認(rèn)知表征; 人們使用心理模型來預(yù)測(cè)和解釋他們周圍世界的行為, 并建構(gòu)預(yù)期(楊正宇 等, 2003)。在人機(jī)交互的研究中, 心理模型可以幫助個(gè)體更好地通過線索推斷機(jī)器人的內(nèi)在狀態(tài)并預(yù)測(cè)它的能力(Lee et al., 2005)。但是, 由于心理模型是建立在個(gè)人經(jīng)驗(yàn)的基礎(chǔ)之上的, 如果有新的經(jīng)驗(yàn)發(fā)生, 心理模型也會(huì)隨之改變, 因此個(gè)體之間的心理模型很可能各不相同(Müller et al., 2023)。人機(jī)信任校準(zhǔn)的前提是人們能正確、全面、客觀地看待機(jī)器人的優(yōu)勢(shì)與劣勢(shì), 換句話說, 個(gè)體需要具備恰當(dāng)?shù)男睦砟P鸵员碚骱屠斫鈾C(jī)器人的能力。舉例來講, 人機(jī)交互中, 只有當(dāng)機(jī)器人發(fā)出的信號(hào)被人類用戶恰當(dāng)解釋時(shí), 人類才可以預(yù)測(cè)和理解機(jī)器人的行為(Breazeal, 2003)。因此, 如果個(gè)體擁有對(duì)機(jī)器人恰當(dāng)?shù)男睦砟P停?就能較好地校準(zhǔn)信任, 反之則會(huì)由于對(duì)機(jī)器人的能力估計(jì)錯(cuò)誤而導(dǎo)致信任偏差(Ososky et al., 2013)。
2.2.3 與情境有關(guān)的因素
風(fēng)險(xiǎn)與時(shí)間壓力。高風(fēng)險(xiǎn)條件或許會(huì)增大個(gè)體信任機(jī)器人的概率。在Robinette等人(2016)的研究中, 參與者在機(jī)器人的帶領(lǐng)下前往會(huì)議室。機(jī)器人的帶領(lǐng)路線有兩種類型, 一種是迂回的低效率路線, 一種是不迂回的高效率路線。采用迂回路線帶領(lǐng)參與者前往會(huì)議室的機(jī)器人被視為低能力的機(jī)器人。當(dāng)參與者到達(dá)會(huì)議室后聽到警報(bào), 需在一分鐘之內(nèi)立刻逃離這棟大樓。所有的參與者都跟隨了機(jī)器人, 甚至忽略了之前機(jī)器人的低能力。時(shí)間壓力也會(huì)加劇過度信任, 如果參與者感知到時(shí)間緊迫, 就更有可能向機(jī)器人尋求幫助, 盡管之前已經(jīng)觀測(cè)到它出現(xiàn)過錯(cuò)誤(Xu & Howard, 2018)。
決策領(lǐng)域特點(diǎn)。有研究者認(rèn)為, 人們對(duì)算法是厭惡還是欣賞, 其關(guān)鍵決定因素是算法(相對(duì)于人類)背后的專業(yè)能力(Hou & Jung, 2021)。如果個(gè)體認(rèn)為算法在某方面的專業(yè)能力不及人類, 則有可能出現(xiàn)算法厭惡。譬如在醫(yī)療診斷中, 人們?cè)诖蠖鄶?shù)情況下會(huì)認(rèn)為人類決策優(yōu)于算法決策, 一方面是因?yàn)槿祟悰Q策會(huì)讓個(gè)體感到更有尊嚴(yán), 相比之下算法決策會(huì)給人們帶來非人化(Dehumanization)體驗(yàn)(Formosa et al., 2022)。另一方面, 當(dāng)涉及到需要進(jìn)行自我披露時(shí), 與機(jī)器人相比, 人們也更愿意去信任人類, 對(duì)人類的披露欲也更強(qiáng)(Barfield, 2021)。與此同時(shí), 決策領(lǐng)域的確定性程度也會(huì)影響人們是否使用算法。隨著決策領(lǐng)域中不確定性的增加, 人類和算法之間的績(jī)效差異逐漸縮小, 相對(duì)于“不完美”的人類犯錯(cuò), 人們更不能接受“完美”的算法犯錯(cuò)。因此, 對(duì)錯(cuò)誤敏感性降低的個(gè)體將傾向于依賴風(fēng)險(xiǎn)更高、誤差更大的人類判斷(Dietvorst & Bharti, 2020)。這種對(duì)算法的偏見也可能源于人們感知到:相較于人類決策, 算法決策更加不公平、更不值得信賴, 甚至算法犯錯(cuò)后還會(huì)引發(fā)更多的負(fù)面情緒(Lee, 2018)。
3 人機(jī)信任校準(zhǔn)的途徑
人機(jī)信任校準(zhǔn)包括信任抑制與信任提升兩條途徑(見表1)。信任抑制是指當(dāng)機(jī)器人犯錯(cuò)卻未被察覺或意外做出正確決策后, 旨在降低個(gè)體不切實(shí)際的高信任水平的活動(dòng)(de Visser et al., 2020); 信任提升定義為在初始交互時(shí), 抑或是信任違背后, 旨在使信任方的較低信任信念和意愿更加積極的活動(dòng)(Kim et al., 2004)。下面將分別從機(jī)器人、個(gè)體、情境三方面介紹對(duì)應(yīng)的具體信任校準(zhǔn)策略。
3.1 與機(jī)器人有關(guān)的信任校準(zhǔn)策略
透明度提升。提升機(jī)器人透明度既可以用于降低過度信任(de Visser et al., 2020), 同時(shí)也可以用于提升信任不足(Lyons et al., 2017)。但總體而言, 透明度常常用于糾正過度信任。透明度包括向用戶提供關(guān)于模型如何工作的相關(guān)信息(Bhatt et al., 2020), 以幫助他們理解該系統(tǒng)(Seong amp; Bisantz, 2008)。算法需要具備可理解性(Understandability), 讓用戶了解算法內(nèi)部的底層運(yùn)行機(jī)制, 信任并正確地使用算法系統(tǒng)。透明度還包括公開機(jī)器人的內(nèi)在言語(Inner speech), 把機(jī)器人做出決策的推理過程、動(dòng)機(jī)過程、目標(biāo)和行動(dòng)計(jì)劃展現(xiàn)給用戶(Chen et al., 2018; Geraci et al., 2021), 從而抑制信任。機(jī)器人也可以及時(shí)向用戶提供有關(guān)性能的反饋, 例如通過語音的方式傳達(dá)它對(duì)于決策正確與否的不確定性(Okamura amp; Yamada, 2020)。正如前文所述, 可預(yù)測(cè)性是機(jī)器人可靠性的重要組成部分之一。當(dāng)機(jī)器人性能不穩(wěn)定、不可預(yù)測(cè)時(shí), 個(gè)體就無法對(duì)機(jī)器人的可靠性進(jìn)行準(zhǔn)確評(píng)估。因此, 通過向用戶傳達(dá)(性能)不確定性, 暗示機(jī)器人可能在后續(xù)的交互過程中出現(xiàn)性能下降的情況, 有利于抑制過高信任。Beller等人(2013)通過駕駛模擬任務(wù)檢驗(yàn)不確定性(即自動(dòng)駕駛汽車在性能不確定的情況下出現(xiàn)一個(gè)帶有遲疑表情的emoji)在信任校準(zhǔn)中的作用。研究顯示, 與控制組相比, 不確定組能降低用戶對(duì)于自動(dòng)駕駛汽車的依賴, 提醒用戶為自動(dòng)化故障做好準(zhǔn)備, 并能促使用戶更主動(dòng)、更快地處理自動(dòng)化駕駛汽車的故障。不確定組的參與者也更能在駕駛?cè)蝿?wù)中集中注意力, 更不容易被其他無關(guān)刺激所干擾。該研究結(jié)果與Kunze等人(2019)基本一致, 不確定性反饋有助于用戶調(diào)整注意力的分配, 進(jìn)而校準(zhǔn)信任。但不確定性信息該如何呈現(xiàn)則需要設(shè)計(jì)師仔細(xì)斟酌, 因不確定性呈現(xiàn)雖會(huì)有利于信任校準(zhǔn), 但可能誘發(fā)更高的工作負(fù)載, 從而降低任務(wù)績(jī)效(Kunze et al., 2019)。
顯示信心指數(shù)亦是提升透明度的策略之一(de Visser et al., 2020)。信心指數(shù)即AI作出正確決策的概率, 理論上人們應(yīng)該在AI報(bào)告信心指數(shù)高的情況下依賴AI, 在信心指數(shù)低的情況下依賴自己的判斷。McGuirl和Sarter (2006)發(fā)現(xiàn), 如果自動(dòng)化能提供動(dòng)態(tài)更新的信心指數(shù)將有助于飛行員在任務(wù)分配和是否遵守自動(dòng)化系統(tǒng)的建議等方面做出更好的決策, 對(duì)系統(tǒng)準(zhǔn)確性的估計(jì)也更加精準(zhǔn)。類似的, 在人機(jī)交互中, 如果機(jī)器人能經(jīng)常向用戶提供它對(duì)完成某件任務(wù)的信心指數(shù), 個(gè)體或許就可以根據(jù)該指數(shù)合理分配任務(wù)給機(jī)器人。
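按照這一邏輯, 并結(jié)合前文關(guān)于自我信心的討論, 信心指數(shù)在依賴決策中的作用可以用一個(gè)簡(jiǎn)化的決策規(guī)則來示意。以下Python代碼僅為概念性草圖, 其中的閾值margin與函數(shù)命名均為假設(shè), 并非McGuirl和Sarter (2006)研究中的實(shí)際實(shí)現(xiàn)。

```python
def choose_decision_source(ai_confidence: float, self_confidence: float,
                           margin: float = 0.05) -> str:
    """根據(jù)AI報(bào)告的信心指數(shù)與用戶自我信心決定依賴對(duì)象(示意)。
    margin 為避免兩者接近時(shí)頻繁切換而設(shè)的緩沖, 屬于假設(shè)參數(shù)。"""
    if ai_confidence >= self_confidence + margin:
        return "rely_on_ai"       # AI信心明顯更高: 采納AI建議
    if self_confidence >= ai_confidence + margin:
        return "rely_on_self"     # 自我信心明顯更高: 依靠自己的判斷
    return "verify_first"         # 兩者接近: 先核查再?zèng)Q策

# 示例: AI信心0.92, 自我信心0.60, 建議依賴AI
print(choose_decision_source(ai_confidence=0.92, self_confidence=0.60))
```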
解釋。可解釋的AI (Explainable AI, XAI)是人們正確校準(zhǔn)信任的必要組成部分(Adadi & Berrada, 2018), 亦是信任抑制的主要策略之一(Buçinca et al., 2021)。XAI需要為用戶提供有意義解釋, 同時(shí)也可向用戶索要解釋(de Visser et al., 2020), 它通過讓用戶了解AI的決策過程, 以期AI給出錯(cuò)誤決策時(shí)能被用戶準(zhǔn)確識(shí)別并拒絕。人們之所以對(duì)機(jī)器人的期望過高, 機(jī)器人的“黑匣子” (Black box)屬性有可能是其中的重要因素。如果在人機(jī)交互之前能及時(shí)打開“黑匣子”, 或許能降低個(gè)體過高的信任水平, 并幫助個(gè)體建立起對(duì)于機(jī)器人的良好心理模型。Wang等人(2018)發(fā)現(xiàn), 解釋有利于校準(zhǔn)參與者的信任, 從而幫助參與者更好地做出決策。當(dāng)機(jī)器人沒有給出任何解釋時(shí), 參與者會(huì)過度信任機(jī)器人進(jìn)而導(dǎo)致決策失誤。相比之下, 當(dāng)機(jī)器人提供解釋時(shí), 參與者對(duì)機(jī)器人的依從率就降低了。除此之外, 適當(dāng)?shù)貍鬟_(dá)機(jī)器人的局限也可以進(jìn)一步糾正個(gè)體對(duì)于機(jī)器人高可靠性的期待, 例如機(jī)器人明確地告知用戶自己能夠執(zhí)行的任務(wù)和功能范圍, 從而避免個(gè)體出現(xiàn)濫用機(jī)器人的情況(de Visser et al., 2020)。
與此同時(shí), 人機(jī)交互中當(dāng)機(jī)器人出現(xiàn)錯(cuò)誤時(shí), 適當(dāng)?shù)慕忉層欣趥€(gè)體更好地了解錯(cuò)誤發(fā)生的機(jī)制, 并通過給出相關(guān)證據(jù)來加強(qiáng)解釋的說服力, 從而提升信任。解釋包括說明錯(cuò)誤發(fā)生的原因(Correia et al., 2018)、承認(rèn)錯(cuò)誤事件的發(fā)生并給出一個(gè)能夠推斷出因果關(guān)系的理由(Bhatt et al., 2020; Lyons et al., 2023)、提出解決問題的辦法(Hald et al., 2021)。解釋需要與用戶的知識(shí)背景相匹配(Adadi & Berrada, 2018; Kim & Hinds, 2006), 如果提供一個(gè)過于專業(yè)化的術(shù)語解釋, 反而會(huì)讓用戶一頭霧水, 并降低機(jī)器人的透明度。然而, 解釋有時(shí)也會(huì)弄巧成拙(Papenmeier et al., 2019), 它是否有效可能跟該錯(cuò)誤導(dǎo)致的后果是否嚴(yán)重有關(guān)。在Correia等人(2018)的研究中, 機(jī)器人與參與者合作完成拼湊七巧板的任務(wù)。當(dāng)機(jī)器人突然出現(xiàn)語音故障導(dǎo)致游戲暫停時(shí), 只有當(dāng)參與者和機(jī)器人可以繼續(xù)完成剩下的任務(wù)時(shí), 機(jī)器人的解釋才有效, 當(dāng)需要重啟游戲時(shí), 解釋就無用了, 甚至在這種情形下的解釋還會(huì)導(dǎo)致參與者的信任水平下降。
承諾。承諾適用于正直型違背或能力型違背, 前者側(cè)重于被信任方(Trustee)因?yàn)樗?她的誠(chéng)實(shí)品質(zhì)問題而造成信任方(Trustor)的信任下降, 后者則強(qiáng)調(diào)被信任方因能力不足而沒有達(dá)到信任方的期待造成信任下降(嚴(yán)瑜, 吳霞, 2016)。承諾不僅包括了信任違背之后機(jī)器人給予人類的承諾, 還包括了人類事先給予機(jī)器人的承諾。針對(duì)前一類, Esterwood和Robert (2022)發(fā)現(xiàn), 當(dāng)個(gè)體之前對(duì)機(jī)器人的積極態(tài)度高時(shí), 承諾對(duì)修復(fù)信任最有效。承諾通過保證個(gè)體對(duì)機(jī)器人所持的態(tài)度是正確的來強(qiáng)化個(gè)體的積極態(tài)度, 進(jìn)而減少認(rèn)知失調(diào), 更有利于信任修復(fù)。針對(duì)后一類, Sebo等人(2019)發(fā)現(xiàn), 如果人類與機(jī)器人在交互前事先進(jìn)行了互惠性承諾, 也就是以不傷害對(duì)方的前提下公平競(jìng)爭(zhēng), 即使在任務(wù)中機(jī)器人耍賴并欺騙了參與者, 事后相較于沒有做出事前互惠承諾的個(gè)體, 參與者仍報(bào)告了對(duì)機(jī)器人較高的信任。
道歉。道歉是人機(jī)交互中修復(fù)信任最常見的方法, 它適用于能力型信任違背(Quinn, 2018)。道歉被定義為承認(rèn)自己因信任違背行為所帶來的責(zé)任, 并表達(dá)遺憾(Kim et al., 2004)。道歉常常會(huì)跟歸因相聯(lián)系。例如Kim和Song (2021)發(fā)現(xiàn), 當(dāng)發(fā)生信任違背后, 相比于使用外部歸因道歉策略, 使用內(nèi)部歸因道歉策略的類人化虛擬智能體更能修復(fù)信任; 該結(jié)果恰恰與類機(jī)器虛擬智能體相反。當(dāng)機(jī)器人表達(dá)出類似于人類的情緒, 例如遺憾時(shí), 相較于沒有表現(xiàn)出遺憾的機(jī)器人, 參與者對(duì)其信任度急劇增加; 當(dāng)?shù)狼讣劝ㄟz憾的語言表達(dá), 又包括解釋時(shí), 信任水平的增長(zhǎng)尤為明顯(Kox et al., 2021)。同時(shí), 道歉時(shí)機(jī)也很重要。Robinette等人(2015)通過模擬火災(zāi)危機(jī)情境發(fā)現(xiàn), 在信任違背之后機(jī)器人的立刻道歉和解釋不能有效地修復(fù)信任, 而在危機(jī)時(shí)刻機(jī)器人同樣使用道歉和承諾時(shí), 大部分的參與者會(huì)選擇重新跟隨機(jī)器人前往緊急出口。但Quinn (2018)也質(zhì)疑道歉可能會(huì)因?yàn)闄C(jī)器人反復(fù)表達(dá)內(nèi)疚和感知的低真誠(chéng)而降低信任修復(fù)的有效性。
否認(rèn)。否認(rèn)對(duì)于修復(fù)正直型違背十分有效(Sebo et al., 2019)。否認(rèn)常包括否認(rèn)外部因果關(guān)系, 既不承認(rèn)任何責(zé)任, 也不表示遺憾(Kim et al., 2004)。否認(rèn)給予了信任違背者一個(gè)機(jī)會(huì)去反駁與質(zhì)疑, 而不是單純地承認(rèn)錯(cuò)誤。同時(shí), 它也表明一種沒有必要去改正行為的意向, 這可能會(huì)導(dǎo)致個(gè)體對(duì)違背者未來信任行為的擔(dān)憂(Kim et al., 2004)。但相對(duì)于道歉策略直接將機(jī)器人的失敗暴露于人類面前, 當(dāng)個(gè)體處于高工作負(fù)載條件, 且無法驗(yàn)證機(jī)器人的正直性或無法厘清故障原因時(shí), 否認(rèn)可能是一種更安全的修復(fù)策略(Quinn, 2018)。有趣的是, 當(dāng)機(jī)器人出現(xiàn)正直型違背后給予否認(rèn), 雖然后續(xù)參與者報(bào)告的信任水平與其他條件無差異, 但是有60%的參與者會(huì)選擇在實(shí)驗(yàn)中對(duì)機(jī)器人進(jìn)行報(bào)復(fù)(Sebo et al., 2019)。
指責(zé)。指責(zé)是信任修復(fù)中風(fēng)險(xiǎn)較大的一種策略。與道歉相類似, 指責(zé)通常也會(huì)涉及到歸因問題, 且最好讓機(jī)器人在引入指責(zé)歸因時(shí), 將任務(wù)失敗歸結(jié)于機(jī)器人自己內(nèi)部的原因(Groom et al., 2010), 而不是外部原因(算法設(shè)計(jì)師、第三方算法、人類同伴)。信任違背之后, 相比于外部指責(zé), 通過內(nèi)部指責(zé)進(jìn)行解釋雖然在行為信任上不存在顯著的差異, 但是卻能讓參與者感知到更強(qiáng)的正直性和仁愛性(Jensen et al., 2019)。同樣, 也并不是所有的指責(zé)都有效, 如果指責(zé)僅強(qiáng)調(diào)是自己的錯(cuò)誤而不指出發(fā)生錯(cuò)誤的原因, 那么機(jī)器人引入指責(zé)歸因同樣會(huì)導(dǎo)致參與者的信任感知下降(Kaniarasu & Steinfeld, 2014):一個(gè)指責(zé)他人(尤其是參與者本身)的機(jī)器人會(huì)讓參與者感到憤怒, 而一個(gè)自怨自艾的機(jī)器人盡管很誠(chéng)實(shí)地指出了自己的錯(cuò)誤, 同樣會(huì)讓人覺得不值得信任。
擬人化。擬人化是將人類特征、動(dòng)機(jī)、意向或心理狀態(tài)賦予非人對(duì)象的心理過程或者個(gè)體差異(許麗穎 等, 2017; Epley et al., 2007), 它可以用于提升人機(jī)信任。因算法常被知覺為冷冰冰、缺乏溫暖和體驗(yàn)性, 如果能在算法中添加一些與人類相似的高情感特征, 比如使用女性機(jī)器人(高溫暖與體驗(yàn)性的代表), 或許會(huì)緩解人們對(duì)機(jī)器人去人性化的感知(Borau et al., 2021)。Toader等人(2019)證實(shí), 與男性聊天機(jī)器人相比, 跟女性聊天機(jī)器人互動(dòng)的參與者對(duì)于個(gè)人信息的披露意愿更強(qiáng), 社交感知和服務(wù)滿意度也明顯更高。由于擬人化需要將人類特征投射到機(jī)器人上, 因此, 擬人化可能會(huì)讓參與者產(chǎn)生“機(jī)器人就像人類一樣容易出錯(cuò)”的認(rèn)知(Aroyo et al., 2021), 進(jìn)而增加信任彈性(Trust resilience), 幫助個(gè)體形成有關(guān)于機(jī)器人的心理模型(Ososky et al., 2013), 減緩錯(cuò)誤后參與者的信任下降速度(de Visser et al., 2016)。在一項(xiàng)研究中, 機(jī)器人的主要任務(wù)是給參與者遞四個(gè)雞蛋讓參與者能順利制作雞蛋卷。相比于另外兩個(gè)不能交流的機(jī)器人, 一個(gè)能交流、能表達(dá)自己情緒(例如掉落雞蛋之后做出委屈的表情)、但偶爾犯錯(cuò)(運(yùn)輸雞蛋過程不慎掉落一顆雞蛋)的機(jī)器人更受參與者喜愛, 甚至當(dāng)它犯錯(cuò)之后, 參與者對(duì)其的信任程度仍不亞于一個(gè)效率高(不掉落雞蛋)但沉默的機(jī)器人(Hamacher et al., 2016)。
然而, 擬人化也可能誘發(fā)過度信任。用戶可能會(huì)過度信任擬人化程度高的機(jī)器人, 因?yàn)楦邤M人化機(jī)器人往往會(huì)被知覺為更可靠、更仁愛、更誠(chéng)實(shí), 導(dǎo)致用戶對(duì)機(jī)器人產(chǎn)生一種錯(cuò)誤的熟悉感, 從而誘發(fā)對(duì)其類人的預(yù)期(Wagner et al., 2018)。因此, 對(duì)于初始信任較高的個(gè)體, 降低機(jī)器人的擬人化特征也是抑制信任的方法之一。
3.2 與個(gè)體有關(guān)的信任校準(zhǔn)策略
增加接觸。增加接觸在一定程度上能改變個(gè)體對(duì)機(jī)器人的態(tài)度。在信任提升方面, 研究證實(shí)曝光效應(yīng)(Exposure effect)同樣存在于人機(jī)交互之中(Jessup et al., 2020; Wullenkord et al., 2016)。與機(jī)器人的面對(duì)面互動(dòng)會(huì)減弱人們對(duì)機(jī)器人的警惕(Haring et al., 2013), 減少不確定性和風(fēng)險(xiǎn)感知(Kraus, Scholz, Messner, et al., 2020), 增進(jìn)個(gè)體對(duì)于機(jī)器人的好感, 提升人機(jī)初始信任。有趣的是, 僅僅是幫機(jī)器人按下一個(gè)按鈕, 參與者對(duì)機(jī)器人的信任水平就會(huì)高于沒有這一幫助經(jīng)歷的參與者(Ullman & Malle, 2017)。總而言之, 與機(jī)器人的實(shí)際接觸可能會(huì)降低個(gè)體對(duì)機(jī)器人的負(fù)面偏見和焦慮, 糾正個(gè)體過去對(duì)于機(jī)器人可能構(gòu)成威脅的不恰當(dāng)認(rèn)知, 最終提高個(gè)體未來與機(jī)器人繼續(xù)接觸的意圖(Wullenkord et al., 2016)。
接觸也可以在一定程度上改變個(gè)體對(duì)于算法不恰當(dāng)?shù)男蕾p, 從而降低過高信任。人機(jī)交互經(jīng)驗(yàn)已被證明與自動(dòng)化依賴息息相關(guān)(Goddard et al., 2012)。Haring等人(2013)的研究中, 與機(jī)器人交互前個(gè)體可能認(rèn)為機(jī)器人較為聰明, 但是當(dāng)真正與機(jī)器人交互之后, 參與者對(duì)機(jī)器人的擬人化感知、智力感知均有降低。該研究的結(jié)果在Wullenkord等人(2016)的研究中得到了重復(fù):參與者在人機(jī)交互前認(rèn)為機(jī)器人的情緒體驗(yàn)性較強(qiáng), 但是真正與機(jī)器人互動(dòng)之后, 他們便會(huì)逐步意識(shí)到機(jī)器人不是很先進(jìn), 可以體驗(yàn)到的情緒也比他們想象的要少; 相比之下, 控制組(即沒有與機(jī)器人交互過)的參與者仍然秉持著機(jī)器人能力較強(qiáng)的觀點(diǎn)。與機(jī)器人接觸過的實(shí)際經(jīng)驗(yàn)使個(gè)體對(duì)機(jī)器人能力認(rèn)知開始趨于正常化(Sanders et al., 2017), 從而達(dá)到校準(zhǔn)信任的效果。
降低期望。降低期望是人機(jī)信任抑制的方法之一。信任的動(dòng)態(tài)變化特點(diǎn)促使個(gè)體在交互過程中根據(jù)新信息不斷校準(zhǔn)對(duì)機(jī)器人性能的期望, 因此個(gè)體對(duì)于機(jī)器人的認(rèn)知也會(huì)隨著與機(jī)器人交互的深入而不斷更新 (Kraus, Scholz, Stiegemeier, et al., 2020)。Pop等人(2015)發(fā)現(xiàn), 對(duì)自動(dòng)化具有高期望的用戶雖然對(duì)于自動(dòng)化可靠性的變化更為敏感, 但卻不一定會(huì)具有較好的信任校準(zhǔn)能力。當(dāng)自動(dòng)化能力提高時(shí), 用戶的信任校準(zhǔn)較好, 而當(dāng)自動(dòng)化能力降低時(shí), 信任校準(zhǔn)較差。因此, 如果個(gè)體對(duì)機(jī)器人的期望較高, 事先預(yù)警(Forewarning)是比較有效的方法。通過預(yù)先警告該任務(wù)難度、提醒用戶自己可能在該任務(wù)中表現(xiàn)不佳(de Visser et al., 2020), 進(jìn)而幫助個(gè)體重新設(shè)定他們的期望(Lee et al., 2010)。
提高算法素養(yǎng)。算法素養(yǎng)(Algorithm literacy)主要包括四個(gè)方面:(1)用戶了解App以及平臺(tái)算法是如何被使用的; (2)用戶知道算法如何運(yùn)行; (3)用戶能夠批判性地評(píng)估算法做出的決策; (4)用戶能有效處理算法運(yùn)行過程中出現(xiàn)的問題(Dogruel et al., 2022)。如果個(gè)體具備良好的算法素養(yǎng), 就可以與機(jī)器人順利交互, 并從機(jī)器人的解釋中提取新的知識(shí), 進(jìn)而改善心理模型(Naiseh et al., 2021)。算法素養(yǎng)可以通過學(xué)習(xí)提升, 例如在機(jī)器人用戶手冊(cè)中強(qiáng)調(diào)過度信任的風(fēng)險(xiǎn), 并列舉過度信任機(jī)器人的利弊; 機(jī)器人運(yùn)營(yíng)商也可以開發(fā)機(jī)器人培訓(xùn)相關(guān)的課程, 幫助人們正確地了解機(jī)器人(Aroyo et al., 2021), 從而提升個(gè)體的自我信心, 降低對(duì)于機(jī)器人的高依從性。此外, 用戶同樣可以通過自我學(xué)習(xí)不斷更新自己關(guān)于AI的知識(shí), 以最有效的方式來校準(zhǔn)信任。
3.3 與情境有關(guān)的信任校準(zhǔn)策略
認(rèn)知干預(yù)、增加認(rèn)知資源。情境的特點(diǎn)會(huì)影響認(rèn)知資源負(fù)荷。首先, 在高風(fēng)險(xiǎn)與時(shí)間壓力下的個(gè)體往往會(huì)出現(xiàn)認(rèn)知資源過載的情況。根據(jù)認(rèn)知負(fù)荷理論(Cognitive load theory), 人的工作記憶能力是有限的, 而認(rèn)知負(fù)荷又分為內(nèi)在認(rèn)知負(fù)荷和外在認(rèn)知負(fù)荷。內(nèi)在認(rèn)知負(fù)荷主要是由學(xué)習(xí)任務(wù)產(chǎn)生的, 而外在認(rèn)知負(fù)荷則來自于與學(xué)習(xí)任務(wù)無關(guān)的其他來源, 例如環(huán)境(Sweller, 2011)。從該理論出發(fā), 人機(jī)交互過程中個(gè)體的認(rèn)知負(fù)荷較高容易導(dǎo)致個(gè)體無法準(zhǔn)確識(shí)別自動(dòng)化錯(cuò)誤; 而不斷監(jiān)督自動(dòng)化運(yùn)行情況可能會(huì)讓個(gè)體產(chǎn)生與主任務(wù)無關(guān)的額外認(rèn)知負(fù)荷(Lyell & Coiera, 2017), 認(rèn)知資源越少, 個(gè)體就越容易出現(xiàn)過度信任算法的傾向(Chien et al., 2016)。因此優(yōu)化人機(jī)交互環(huán)境或許有利于提高認(rèn)知資源利用率并抑制信任, 例如簡(jiǎn)化個(gè)體的用戶界面(Naiseh et al., 2023), 以清晰可理解的方式提供指示(Wickens, 1995)。
其次, 人們?cè)诳焖贈(zèng)Q策情境中也容易受到認(rèn)知啟發(fā)式的影響。在此基礎(chǔ)之上, Bu?inca等人(2021)提出了相應(yīng)的認(rèn)知干預(yù)策略。他們以認(rèn)知的雙重加工理論為切入點(diǎn), 指出人們的認(rèn)知過程包括雙重系統(tǒng), 第一重系統(tǒng)是啟發(fā)性思維階段(包括啟發(fā)式和心理捷徑), 即人們?yōu)闇p少認(rèn)知資源損耗而通過單一線索作出判斷與決策(Tam amp; Ho, 2005); 第二重系統(tǒng)是分析性思維階段。總體來說, 人們的大多數(shù)日常決策都是通過啟發(fā)性思維完成的, 分析性思維因其觸發(fā)緩慢、需要較多認(rèn)知資源而很少被激活。他們通過訓(xùn)練參與者進(jìn)行認(rèn)知強(qiáng)迫(Cognitive forcing), 或是要求參與者先于AI做出決定, 或是通過增加AI給出建議的時(shí)間去放緩決策過程, 又或是讓參與者選擇是否以及何時(shí)查看AI建議。研究結(jié)果表明, 認(rèn)知干預(yù)增加了參與者進(jìn)行分析性思維的認(rèn)知?jiǎng)訖C(jī), 進(jìn)一步減少了參與者對(duì)AI的過度依賴。
增強(qiáng)決策領(lǐng)域中機(jī)器人(算法)的優(yōu)勢(shì)。Hou和Jung (2021)認(rèn)為, 人類并不是一味地偏向于算法或人類決策, 個(gè)體實(shí)質(zhì)偏向的是算法或人類背后的專業(yè)能力。適度在算法決策背后注入專家力量有利于改善個(gè)體原先的消極信任態(tài)度。除此之外, 在不同的任務(wù)領(lǐng)域匹配不同外表的機(jī)器人可以幫助個(gè)體更好地接納機(jī)器人, 比如享樂主導(dǎo)的服務(wù)環(huán)境中, 個(gè)體對(duì)像兒童的高熱情服務(wù)機(jī)器人表現(xiàn)出更高的偏好, 而在功利主導(dǎo)的服務(wù)環(huán)境中, 他們對(duì)像成人的高能力服務(wù)機(jī)器人表現(xiàn)出更高的偏好(Liu et al., 2022)。人們通常厭惡在一些較為主觀的任務(wù)中使用算法, 但如果能在這些看似主觀的任務(wù)中強(qiáng)調(diào)一些可以用客觀事實(shí)解釋的部分, 也可以降低主觀任務(wù)的“主觀性”, 使參與者可以更好地接受算法決策。比如告知參與者歌曲的推薦(主觀任務(wù))一定程度上也可以由個(gè)體的人格特質(zhì)(客觀性)所決定, 這時(shí)個(gè)體對(duì)于算法決策的信任會(huì)增強(qiáng)(Castelo et al., 2019)。
4 未來研究展望
人機(jī)交互已經(jīng)逐漸滲透到日常生活的方方面面, 而信任對(duì)于保持團(tuán)隊(duì)凝聚力至關(guān)重要(Perkins et al., 2021)。然而, 信任只有維持在恰當(dāng)?shù)乃讲拍艽龠M(jìn)有效的人機(jī)合作。一旦信任出現(xiàn)過高或過低的情況, 就會(huì)對(duì)人機(jī)合作造成一定的威脅, 因此需要對(duì)信任進(jìn)行校準(zhǔn)。目前針對(duì)人機(jī)信任校準(zhǔn)雖然已經(jīng)有較多研究成果的積累, 但是仍存在不足和值得改進(jìn)的地方。
4.1 優(yōu)化校準(zhǔn)效果評(píng)估的測(cè)量方法
第一, 從測(cè)量方法上看, 在人機(jī)信任領(lǐng)域已經(jīng)有研究者開始采用內(nèi)隱方式考察個(gè)體對(duì)自動(dòng)化的信任(Merritt et al., 2013), 但仍限于個(gè)體信任校準(zhǔn)之前對(duì)自動(dòng)化/機(jī)器人的信任測(cè)量, 未涉及到個(gè)體在人機(jī)交互中出現(xiàn)了信任偏差并進(jìn)行信任校準(zhǔn)之后的后續(xù)內(nèi)隱信任測(cè)量。第二, 校準(zhǔn)信任之后研究者們往往以信任量表得分或信任行為等外顯信任指標(biāo)作為衡量信任修復(fù)/抑制效果的主要方法。我們認(rèn)為, 校準(zhǔn)信任之后不僅要關(guān)注個(gè)體的外顯信任態(tài)度, 同時(shí)也要關(guān)注個(gè)體的內(nèi)隱信任態(tài)度以更好地檢驗(yàn)校準(zhǔn)策略的有效性與實(shí)用性。以信任抑制為例, 未來不僅可以通過信任量表檢驗(yàn)抑制策略是否有效, 亦可通過內(nèi)隱聯(lián)想測(cè)驗(yàn)探討信任抑制后個(gè)體的內(nèi)隱信任水平是否也有降低, 對(duì)比信任抑制策略在降低外顯信任和內(nèi)隱信任之間的差異。
4.2 揭示信任校準(zhǔn)的認(rèn)知神經(jīng)過程
以往人機(jī)信任的有關(guān)研究大多停留在行為實(shí)驗(yàn)上, 目前已經(jīng)有部分研究者開始從認(rèn)知神經(jīng)的視角去關(guān)注人機(jī)信任(Eloy et al., 2022; Oh et al., 2020; Walker et al., 2019; Yen & Chiang, 2021), 例如機(jī)器人出錯(cuò)后, 個(gè)體的內(nèi)側(cè)和右側(cè)背外側(cè)前額葉皮層內(nèi)觀察到神經(jīng)激活增加, 大腦的功能連接強(qiáng)度降低(Hopko & Mehta, 2022), 前扣帶皮層的負(fù)波增加(de Visser et al., 2018)。信任是一個(gè)連續(xù)的過程, 信任的建立、增長(zhǎng)、受損和消失會(huì)對(duì)信任關(guān)系中每個(gè)成員的目前及未來的行為方式產(chǎn)生強(qiáng)大而持久的影響(Hancock et al., 2011)。同樣的, 完整的信任校準(zhǔn)周期往往會(huì)經(jīng)歷信任建立-信任增長(zhǎng)/受損-信任校準(zhǔn)三個(gè)階段。以往人機(jī)信任的認(rèn)知神經(jīng)研究比較關(guān)注前兩個(gè)階段, 然而在人機(jī)信任校準(zhǔn)過程中, 尤其是對(duì)個(gè)體的偏差信任水平進(jìn)行信任提升或信任抑制之后, 涉及到哪些認(rèn)知神經(jīng)過程還鮮有研究, 而這一部分恰恰對(duì)于人機(jī)信任校準(zhǔn)極為重要, 不僅可以揭示人機(jī)信任校準(zhǔn)的生理機(jī)制, 也可為后續(xù)的信任校準(zhǔn)策略優(yōu)化提供思路與借鑒。未來研究者可利用生理指標(biāo)實(shí)時(shí)、全程監(jiān)測(cè)個(gè)體從信任建立之初到信任校準(zhǔn)后的認(rèn)知神經(jīng)活動(dòng)變化過程, 進(jìn)一步從生理層面揭示個(gè)體的信任動(dòng)態(tài)化發(fā)展。
4.3 結(jié)合信任發(fā)展階段對(duì)信任校準(zhǔn)策略進(jìn)行精細(xì)化研究
信任本身呈現(xiàn)動(dòng)態(tài)發(fā)展變化的特點(diǎn), 以往對(duì)于信任校準(zhǔn)的研究大多是基于靜態(tài)的橫斷面研究, 僅考察了在當(dāng)前階段中如何提升或抑制個(gè)體的信任水平, 并未以動(dòng)態(tài)的視角考察信任水平發(fā)展變化的影響因素。以信任過高為例, 人機(jī)交互之前個(gè)體可能先入為主地持有對(duì)于算法的消極態(tài)度, 并認(rèn)為算法的能力較低。如果在后續(xù)交互過程中個(gè)體感知到了算法的可靠性, 這種原先對(duì)于算法低能力的預(yù)期就會(huì)被打破, 期望落差就更有可能促使人們趨近算法, 并認(rèn)為算法更加值得信任(Washburn et al., 2020)。Filiz等人(2021)發(fā)現(xiàn), 在40輪的股價(jià)預(yù)測(cè)實(shí)驗(yàn)中, 參與者可以選擇相信自己或相信算法, 但最后的報(bào)酬會(huì)與預(yù)測(cè)正確率掛鉤。雖然有的參與者剛開始選擇相信自己, 但當(dāng)他們發(fā)現(xiàn)自己的預(yù)測(cè)準(zhǔn)確率低于算法時(shí), 他們也會(huì)逐漸地選擇相信算法。該結(jié)果在另一項(xiàng)研究中得到了重復(fù):當(dāng)機(jī)器人記者所撰寫的新聞質(zhì)量超過了人們的預(yù)期時(shí), 這種對(duì)于機(jī)器人記者期望的積極失驗(yàn)(Positive disconfirmation)會(huì)讓人們更愿意接受機(jī)器人記者, 并且也會(huì)更加滿意(Kim & Kim, 2021)。在這種情況下人們對(duì)機(jī)器人持有的高信任水平, 與個(gè)體在人機(jī)交互過程中逐漸積累起來的高信任水平可能有所差異, 因此應(yīng)該采用不同的信任抑制策略, 但以往研究并未加以區(qū)分。未來可進(jìn)一步針對(duì)人機(jī)信任偏差產(chǎn)生的不同原因, 分別比較不同的校準(zhǔn)策略的有效性, 探索最適宜某種信任偏差的校準(zhǔn)策略。
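為了說明上述動(dòng)態(tài)視角, 下面給出一個(gè)逐輪更新信任水平的極簡(jiǎn)示意(Python):信任隨交互經(jīng)驗(yàn)演化, 且對(duì)機(jī)器人錯(cuò)誤的反應(yīng)比對(duì)正確表現(xiàn)更劇烈, 以體現(xiàn)前文"錯(cuò)誤導(dǎo)致信任快速、大幅下降"的不對(duì)稱性。其中的更新規(guī)則與參數(shù)均為假設(shè), 不對(duì)應(yīng)任何一項(xiàng)具體研究的擬合模型。

```python
def update_trust(trust: float, robot_correct: bool,
                 gain: float = 0.05, loss: float = 0.20) -> float:
    """逐輪更新信任水平的簡(jiǎn)化模型(示意): 正確時(shí)信任緩慢上升, 出錯(cuò)時(shí)下降幅度更大。
    gain/loss 為假設(shè)參數(shù), 僅用于演示信任的動(dòng)態(tài)變化與不對(duì)稱性。"""
    if robot_correct:
        trust += gain * (1.0 - trust)    # 向上限1逐步逼近
    else:
        trust -= loss * trust            # 按比例下調(diào), 體現(xiàn)錯(cuò)誤的更大影響
    return max(0.0, min(1.0, trust))

trust = 0.5
for correct in [True, True, False, True]:   # 模擬四輪交互中的機(jī)器人表現(xiàn)
    trust = update_trust(trust, correct)
print(round(trust, 3))
```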
4.4 探討信任校準(zhǔn)的邊界條件
首先, 目前人機(jī)信任的研究幾乎都在考察人對(duì)類人機(jī)器人、機(jī)械化機(jī)器人信任水平的發(fā)展變化, 而較少關(guān)注動(dòng)物型機(jī)器人在信任校準(zhǔn)中的作用, 尤其是“萌萌的”動(dòng)物機(jī)器人, 可能會(huì)誘發(fā)個(gè)體對(duì)其天真、善良等特質(zhì)的自動(dòng)推斷, 喚起人們的積極情緒(許麗穎 等, 2019)。一個(gè)有著圓圓的、大大的眼睛的娃娃臉機(jī)器人也可能被認(rèn)為更可信(Song & Luximon, 2020)。動(dòng)物型機(jī)器人比機(jī)械化機(jī)器人更受人喜愛(Li et al., 2010)。人類獨(dú)特性或是人們對(duì)機(jī)器人的初始信任水平較低的原因之一, 既然娃娃臉能夠降低人們對(duì)其威脅性的評(píng)判(許麗穎 等, 2019), 那么萌萌的機(jī)器人也許可以改變個(gè)體的偏見從而提高初始信任水平; 同樣發(fā)生信任違背之后, 萌萌的動(dòng)物型機(jī)器人可能也會(huì)使得信任水平下降得更慢, 更容易被修復(fù)。對(duì)于信任抑制來講, 與類人機(jī)器人相比, 動(dòng)物型機(jī)器人能較好地通過熟悉度降低個(gè)體期望, 同時(shí)避免了類人機(jī)器人設(shè)計(jì)中可能存在的種族偏見等問題(Löffler et al., 2020), 進(jìn)而降低個(gè)體過高的信任; 人們也可能會(huì)因?yàn)轭惾藱C(jī)器人的外表對(duì)其產(chǎn)生不恰當(dāng)?shù)恼J(rèn)知, 反觀動(dòng)物機(jī)器人或許會(huì)降低個(gè)體對(duì)其心理模型的推論, 從而抑制信任。未來可進(jìn)一步對(duì)比類人機(jī)器人與動(dòng)物型機(jī)器人在信任校準(zhǔn)方面的作用, 但需要注意的是, 動(dòng)物型機(jī)器人最好具備較高或較低的動(dòng)物相似度, 否則容易出現(xiàn)恐怖谷效應(yīng)(Löffler et al., 2020)。
其次, 目前人機(jī)信任領(lǐng)域已經(jīng)有研究者開始關(guān)注個(gè)體在群體之中、而不是單獨(dú)與機(jī)器人交互時(shí)信任水平的變化發(fā)展(Montague & Xu, 2012; Montague et al., 2014; Xu & Montague, 2013)。例如Martinez等人(2023)考察了個(gè)體單獨(dú)和作為群體成員(2~3人)在點(diǎn)餐前、點(diǎn)餐后、從機(jī)器人那里取到外賣三個(gè)階段中對(duì)于送餐機(jī)器人的信任與接受度, 結(jié)果發(fā)現(xiàn)隨著與機(jī)器人接觸增多, 單獨(dú)參與的個(gè)體對(duì)機(jī)器人的信任水平在逐漸增長(zhǎng), 但是群體中參與者的信任卻并未隨著接觸的增加而增長(zhǎng), 反而有更多的變異。也有研究者探索了從眾在人機(jī)信任中的影響, 發(fā)現(xiàn)相較于直接與機(jī)器人溝通從而建立起對(duì)機(jī)器人的信任, 人們更喜歡聽從其他人對(duì)于機(jī)器人的評(píng)價(jià)并在此基礎(chǔ)之上作出信任判斷(Volante et al., 2019)。這兩項(xiàng)研究初步探索了個(gè)體在群體中可能會(huì)出現(xiàn)的信任水平變化, 但未涉及如何校準(zhǔn)群體中個(gè)體的信任水平。未來可通過跨文化的方式比較中西方參與者在群體之間的人機(jī)信任水平差異并進(jìn)一步考察如何在群體內(nèi)進(jìn)行信任校準(zhǔn), 亦可比較個(gè)體信任偏差與群體信任偏差的差異與共性, 探索適宜群體信任偏差校準(zhǔn)的策略。
最后, 信任校準(zhǔn)能否成功其實(shí)也取決于個(gè)體的因素, 校準(zhǔn)策略的有效性可能存在個(gè)體差異。例如Lee等人(2010)發(fā)現(xiàn), 對(duì)于不同服務(wù)導(dǎo)向的個(gè)體可以采用不同的信任修復(fù)策略, 持有關(guān)系導(dǎo)向的參與者更喜歡道歉這類信任修復(fù)策略, 而持有功利導(dǎo)向的參與者更喜歡賠償這類實(shí)質(zhì)行為上的信任修復(fù)策略。未來可進(jìn)一步根據(jù)用戶自身的特點(diǎn), 在人機(jī)交互過程中捕捉不同個(gè)體與信任相關(guān)的行為進(jìn)行建模(Pynadath et al., 2019), 以個(gè)性化的方式校準(zhǔn)信任。
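針對(duì)上述個(gè)性化校準(zhǔn)的思路, 可以用一個(gè)簡(jiǎn)化的策略選擇示意(Python)來說明:根據(jù)用戶特點(diǎn)(如服務(wù)導(dǎo)向)匹配相應(yīng)的信任修復(fù)策略。其中的映射表與默認(rèn)策略均為假設(shè), 實(shí)際應(yīng)用中還需結(jié)合對(duì)用戶信任相關(guān)行為的實(shí)時(shí)建模來動(dòng)態(tài)調(diào)整。

```python
# 假設(shè)的"用戶導(dǎo)向→修復(fù)策略"映射, 僅用于說明個(gè)性化校準(zhǔn)的思路
REPAIR_STRATEGY_BY_ORIENTATION = {
    "relational": "apology",        # 關(guān)系導(dǎo)向: 偏好道歉等情感性修復(fù)
    "utilitarian": "compensation",  # 功利導(dǎo)向: 偏好賠償?shù)葘?shí)質(zhì)性修復(fù)
}

def pick_repair_strategy(user_orientation: str) -> str:
    """根據(jù)用戶的服務(wù)導(dǎo)向選擇信任修復(fù)策略; 未知導(dǎo)向時(shí)退回到解釋(假設(shè)的默認(rèn)值)。"""
    return REPAIR_STRATEGY_BY_ORIENTATION.get(user_orientation, "explanation")

print(pick_repair_strategy("relational"))    # 輸出: apology
```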
致謝:感謝西南科技大學(xué)趙文老師為英文摘要潤(rùn)色, 感謝兩位外審專家和編委為本文修改提出了寶貴的意見和建議。
參考文獻(xiàn)
高在峰, 李文敏, 梁佳文, 潘晗希, 許為, 沈模衛(wèi). (2021). 自動(dòng)駕駛車中的人機(jī)信任. 心理科學(xué)進(jìn)展, 29(12), 2172?2183.
許麗穎, 喻豐, 鄔家驊, 韓婷婷, 趙靚. (2017). 擬人化: 從“它”到“他”. 心理科學(xué)進(jìn)展, 25(11), 1942?1954.
許麗穎, 喻豐, 周愛欽, 楊沈龍, 丁曉軍. (2019). 萌: 感知與后效. 心理科學(xué)進(jìn)展, 27(4), 689?699.
嚴(yán)瑜, 吳霞. (2016). 從信任違背到信任修復(fù): 道德情緒的作用機(jī)制. 心理科學(xué)進(jìn)展, 24(4), 633?642.
楊正宇, 王重鳴, 謝小云. (2003). 團(tuán)隊(duì)共享心理模型研究新進(jìn)展. 人類工效學(xué), 9(3), 34?37.
樂國(guó)安, 韓振華. (2009). 信任的心理學(xué)研究與展望. 西南大學(xué)學(xué)報(bào)(社會(huì)科學(xué)版), 35(2), 1?5.
Adadi, A., amp; Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138?52160.
Alarcon, G. M., Gibson, A. M., amp; Jessup, S. A. (2020, September). Trust repair in performance, process, and purpose factors of human-robot trust. In 2020 IEEE International Conference on Human-Machine Systems (ICHMS) (pp. 1?6). Rome, Italy.
Ali, A., Tilbury, D. M., amp; Jr, L. R. (2022). Considerations for task allocation in human-robot teams. arXiv preprint arXiv:2210.03259.
Aroyo, A. M., de Bruyne, J., Dheu, O., Fosch-Villaronga, E., Gudkov, A., Hoch, H., ... Tamò-Larrieux, A. (2021). Overtrusting robots: Setting a research agenda to mitigate overtrust in automation. Paladyn, Journal of Behavioral Robotics, 12(1), 423?436.
Bainbridge, W. A., Hart, J. W., Kim, E. S., amp; Scassellati, B. (2011). The benefits of interactions with physically present robots over video-displayed agents. International Journal of Social Robotics, 3, 41?52.
Barfield, J. K. (2021, August). Self-disclosure of personal information, robot appearance, and robot trustworthiness. In 2021 30th IEEE International Conference on Robot amp; Human Interactive Communication (RO-MAN) (pp. 67?72).Vancouver, BC, Canada.
Beller, J., Heesen, M., amp; Vollrath, M. (2013). Improving the driver-automation interaction: An approach using automation uncertainty. Human Factors, 55(6), 1130?1141.
Bhatt, U., Xiang, A., Sharma, S., Weller, A., Taly, A., Jia, Y., ... Eckersley, P. (2020, January). Explainable machine learning in deployment. In Proceedings of the 2020 conference on fairness, accountability, and transparency (pp. 648?657). https://doi.org/10.1145/3351095.3375624
Biswas, M., amp; Murray, J. C. (2015, September). Towards an imperfect robot for long-term companionship: Case studies using cognitive biases. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 5978?5983). Hamburg, Germany.
Borau, S., Otterbring, T., Laporte, S., amp; Fosso Wamba, S. (2021). The most human bot: Female gendering increases humanness perceptions of bots and acceptance of AI. Psychology amp; Marketing, 38(7), 1052?1068.
Borenstein, J., Wagner, A. R., amp; Howard, A. (2018). Overtrust of pediatric health-care robots: A preliminary survey of parent perspectives. IEEE Robotics amp; Automation Magazine, 25(1), 46?54.
Breazeal, C. (2003). Toward sociable robots. Robotics and Autonomous Systems, 42(3-4), 167?175.
Buçinca, Z., Malaya, M. B., & Gajos, K. Z. (2021). To trust or to think: Cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making. Proceedings of the ACM on Human-Computer Interaction, 5, CSCW1, 1–21.
Castelo, N., Bos, M. W., amp; Lehmann, D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809?825.
Chen, J. Y., Lakhmani, S. G., Stowers, K., Selkowitz, A. R., Wright, J. L., amp; Barnes, M. (2018). Situation awareness- based agent transparency and human-autonomy teaming effectiveness. Theoretical Issues in Ergonomics Science, 19(3), 259?282.
Chiarella, S. G., Torromino, G., Gagliardi, D. M., Rossi, D., Babiloni, F., amp; Cartocci, G. (2022). Investigating the negative bias towards Artificial Intelligence: Effects of prior assignment of AI-authorship on the aesthetic appreciation of abstract paintings. Computers in Human Behavior, 137(C), 107406.
Chien, S. Y., Lewis, M., Sycara, K., Liu, J. S., amp; Kumru, A. (2016, October). Influence of cultural factors in dynamic trust in automation. In 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC) (pp. 2884?2889). Budapest, Hungary.
Correia, F., Guerra, C., Mascarenhas, S., Melo, F. S., amp; Paiva, A. (2018, July). Exploring the impact of fault justification in human-robot trust. In Proceedings of the 17th international conference on autonomous agents and multiagent systems (pp. 507?513). Stockholm, Sweden.
Cymek, D. H., Truckenbrodt, A., amp; Onnasch, L. (2023). Lean back or lean in? Exploring social loafing in human- robot teams. Frontiers in Robotics and AI, 10, 1249252, doi: 10.3389/frobt.2023.1249252.
de Visser, E. J., Beatty, P. J., Estepp, J. R., Kohn, S., Abubshait, A., Fedota, J. R., amp; McDonald, C. G. (2018). Learning from the slips of others: Neural correlates of trust in automated agents. Frontiers in Human Neuroscience, 12, 309.
de Visser, E. J., Monfort, S. S., McKendrick, R., Smith, M. A., McKnight, P. E., Krueger, F., amp; Parasuraman, R. (2016). Almost human: Anthropomorphism increases trust resilience in cognitive agents. Journal of Experimental Psychology: Applied, 22(3), 331?349.
de Visser, E. J., Peeters, M. M., Jung, M. F., Kohn, S., Shaw, T. H., Pak, R., amp; Neerincx, M. A. (2020). Towards a theory of longitudinal trust calibration in human–robot teams. International Journal of Social Robotics, 12(2), 459?478.
Demir, K. A., D?ven, G., amp; Sezen, B. (2019). Industry 5.0 and human-robot co-working. Procedia Computer Science, 158, 688?695.
Dietvorst, B. J., amp; Bharti, S. (2020). People reject algorithms in uncertain decision domains because they have diminishing sensitivity to forecasting error. Psychological Science, 31(10), 1302?1314.
Dietvorst, B. J., Simmons, J. P., amp; Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114?126.
Dijkstra, J. J. (1999). User agreement with incorrect expert system advice. Behaviour amp; Information Technology, 18(6), 399?411.
Dogruel, L., Masur, P., amp; Joeckel, S. (2022). Development and validation of an algorithm literacy scale for internet users. Communication Methods and Measures, 16(2), 115?133.
Dzindolet, M. T., Peterson, S. A., Pomranky, R. A., Pierce, L. G., amp; Beck, H. P. (2003). The role of trust in automation reliance. International Journal of Human-Computer Studies, 58(6), 697?718.
Dzindolet, M. T., Pierce, L. G., Beck, H. P., amp; Dawe, L. A. (2002). The perceived utility of human and automated aids in a visual detection task. Human Factors, 44(1), 79?94.
Ehsan, U., Passi, S., Liao, Q. V., Chan, L., Lee, I., Muller, M., amp; Riedl, M. O. (2021). The who in explainable AI: How AI background shapes perceptions of AI explanations. arXiv preprint, arXiv:2107.13509.
Eloy, L., Doherty, E. J., Spencer, C. A., Bobko, P., amp; Hirshfield, L. (2022). Using fNIRS to identify transparency- and reliability-sensitive markers of trust across multiple timescales in collaborative human-human-agent triads. Frontiers in Neuroergonomics, 3, 838625.
Epley, N., Waytz, A., amp; Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864?886.
Esterwood, C., amp; Robert, L. P. (2021, August). Do you still trust me? Human-robot trust repair strategies. Proceedings of 30th IEEE International Conference on Robot and Human Interactive Communication. Vancouver, BC, Canada.
Esterwood, C., amp; Robert, L. P. (2022, March). Having the right attitude: How attitude impacts trust repair in human-robot interaction. In 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 332?341). Sapporo, Japan.
Filiz, I., Judek, J. R., Lorenz, M., amp; Spiwoks, M. (2021). Reducing algorithm aversion through experience. Journal of Behavioral and Experimental Finance, 31, 100524.
Formosa, P., Rogers, W., Griep, Y., Bankins, S., amp; Richards, D. (2022). Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts. Computers in Human Behavior, 133, 107296.
Geraci, A., D’Amico, A., Pipitone, A., Seidita, V., amp; Chella, A. (2021). Automation inner speech as an anthropomorphic feature affecting human trust: Current issues and future directions. Frontiers in Robotics and AI, 8, 620026.
Goddard, K., Roudsari, A., amp; Wyatt, J. C. (2012). Automation bias: A systematic review of frequency, effect mediators, and mitigators. Journal of the American Medical Informatics Association, 19(1), 121?127.
Groom, V., Chen, J., Johnson, T., Kara, F. A., amp; Nass, C. (2010, March). Critic, compatriot, or chump? Responses to robot blame attribution. In 2010 5th ACM/IEEE international conference on human-robot interaction (HRI) (pp. 211?217). IEEE.
Hald, K., Weitz, K., André, E., amp; Rehm, M. (2021, November). “An Error Occurred!” Trust repair with virtual robot using levels of mistake explanation. In Proceedings of the 9th International Conference on Human-Agent Interaction (pp. 218?226). Virtual Event Japan.
Hamacher, A., Bianchi-Berthouze, N., Pipe, A. G., amp; Eder, K. (2016, August). Believing in BERT: Using expressive communication to enhance trust and counteract operational error in physical Human-robot interaction. In 2016 25th IEEE international symposium on robot and human interactive communication (RO-MAN) (pp. 493? 500). New York.
Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y., de Visser, E. J., amp; Parasuraman, R. (2011). A meta- analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517?527.
Hancock, P. A., Kessler, T. T., Kaplan, A. D., Brill, J. C., amp; Szalma, J. L. (2021). Evolving trust in robots: Specification through sequential and comparative meta-analyses. Human Factors, 63(7), 1196?1229.
Haring, K. S., Matsumoto, Y., amp; Watanabe, K. (2013). How do people perceive and trust a lifelike robot. In Proceedings of the world congress on engineering and computer science (pp. 425?430). San Francisco, USA.
Haring, K. S., Satterfield, K. M., Tossell, C. C., de Visser, E. J., Lyons, J. R., Mancuso, V. F., ... Funke, G. J. (2021). Robot authority in human-robot teaming: Effects of human-likeness and physical embodiment on compliance. Frontiers in Psychology, 12, 625713.
Hoff, K. A., amp; Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407?434.
Hopko, S. K., amp; Mehta, R. K. (2022). Trust in shared-space collaborative robots: Shedding light on the human brain. Human Factors, 66(2). https://doi.org/10.1177/00187208 221109039
Hou, Y. T. Y., amp; Jung, M. F. (2021). Who is the expert? Reconciling algorithm aversion and algorithm appreciation in AI-supported decision making. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 477.
Jensen, T., Albayram, Y., Khan, M. M. H., Fahim, M. A. A., Buck, R., amp; Coman, E. (2019, June). The apple does fall far from the tree: User separation of a system from its developers in human-automation trust repair. In Proceedings of the 2019 on Designing Interactive Systems Conference (pp. 1071?1082). San Diego, CA, USA.
Jessup, S. A., Gibson, A., Capiola, A. A., Alarcon, G. M., amp; Borders, M. (2020, January). Investigating the effect of trust manipulations on affect over time in human-human versus human-robot interactions. Proceedings of the 53rd Hawaii International Conference on System Sciences (pp. 1?10).
Jung, Y., amp; Lee, K. M. (2004). Effects of physical embodiment on social presence of social robots. Proceedings of PRESENCE, 80?87.
Kaniarasu, P., amp; Steinfeld, A. M. (2014, August). Effects of blame on trust in human robot interaction. In The 23rd IEEE international symposium on robot and human interactive communication (pp. 850?855). Edinburgh, Scotland, UK.
Khavas, Z. R. (2021). A review on trust in human-robot interaction. arXiv preprint, arXiv:2105.10045.
Khavas, Z. R., Ahmadzadeh, S. R., amp; Robinette, P. (2020, November). Modeling trust in human-robot interaction: A survey. In Social Robotics: 12th International Conference, ICSR (pp. 529?541). https://doi.org/10.1007/978-3-030- 62056-1_44
Kim, D., amp; Kim, S. (2021). A model for user acceptance of robot journalism: Influence of positive disconfirmation and uncertainty avoidance. Technological Forecasting and Social Change, 163, 120448.
Kim, P. H., Dirks, K. T., amp; Cooper, C. D. (2009). The repair of trust: A dynamic bilateral perspective and multilevel conceptualization. Academy of Management Review, 34(3), 401?422.
Kim, P. H., Ferrin, D. L., Cooper, C. D., amp; Dirks, K. T. (2004). Removing the shadow of suspicion: The effects of apology versus denial for repairing competence-versus integrity-based trust violations. Journal of Applied Psychology, 89(1), 104?118.
Kim, T., amp; Hinds, P. (2006, September). Who should I blame? Effects of autonomy and transparency on attributions in human-robot interaction. In ROMAN 2006-The 15th IEEE international symposium on robot and human interactive communication (pp. 80?85). Hatfield, UK.
Kim, T., amp; Song, H. (2021). How should intelligent agents apologize to restore trust? Interaction effects between anthropomorphism and apology attribution on trust repair. Telematics and Informatics, 61, 101595.
Kox, E. S., Kerstholt, J. H., Hueting, T. F., amp; de Vries, P. W. (2021). Trust repair in human-agent teams: The effectiveness of explanations and expressing regret. Autonomous Agents and Multi-Agent Systems, 35(2), 30.
Kraus, J., Scholz, D., Messner, E. M., Messner, M., amp; Baumann, M. (2020). Scared to trust?–Predicting trust in highly automated driving by depressiveness, negative self-evaluations and state anxiety. Frontiers in Psychology, 10, 2917, doi: 10.3389/fpsyg.2019.02917.
Kraus, J., Scholz, D., Stiegemeier, D., amp; Baumann, M. (2020). The more you know: Trust dynamics and calibration in highly automated driving and the effects of take-overs, system malfunction, and system transparency. Human Factors, 62(5), 718?736.
Kundinger, T., Wintersberger, P., amp; Riener, A. (2019, May). (Over) Trust in automated driving: The sleeping pill of tomorrow? In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1?6). Glasgow, Scotland UK.
Kunze, A., Summerskill, S. J., Marshall, R., amp; Filtness, A. J. (2019). Automation transparency: Implications of uncertainty communication for human-automation interaction and interfaces. Ergonomics, 62(3), 345?360.
Kwon, J. H., Jung, S. H., Choi, H. J., amp; Kim, J. (2021). Antecedent factors that affect restaurant brand trust and brand loyalty: Focusing on US and Korean consumers. Journal of Product amp; Brand Management, 30(7), 990? 1015.
Lee, J. D., amp; Kolodge, K. (2020). Exploring trust in self- driving vehicles through text analysis. Human Factors, 62(2), 260?277.
Lee, J. D., amp; Moray, N. (1992). Trust, control strategies and allocation of function in human-machine systems. Ergonomics, 35(10), 1243?1270.
Lee, J. D., amp; Moray, N. (1994). Trust, self-confidence, and operators’ adaptation to automation. International Journal of Human-Computer Studies, 40(1), 153?184.
Lee, J. D., amp; See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50?80.
Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data amp; Society, 5(1), 1?16.
Lee, M. K., Kiesler, S., Forlizzi, J., Srinivasa, S., amp; Rybski, P. (2010, March). Gracefully mitigating breakdowns in robotic services. In 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 203? 210). Osaka, Japan.
Lee, S. L., Lau, I. Y. M., Kiesler, S., amp; Chiu, C. Y. (2005, April). Human mental models of humanoid robots. In Proceedings of the 2005 IEEE international conference on robotics and automation (pp. 2767?2772). Barcelona, Spain.
Li, D., Rau, P. P., amp; Li, Y. (2010). A cross-cultural study: Effect of robot appearance and task. International Journal of Social Robotics, 2, 175?186.
Liu, X. S., Yi, X. S., amp; Wan, L. C. (2022). Friendly or competent? The effects of perception of robot appearance and service context on usage intention. Annals of Tourism Research, 92, 103324.
Löffler, D., Dörrenbächer, J., & Hassenzahl, M. (2020, March). The uncanny valley effect in zoomorphic robots: The U-shaped relation between animal likeness and likeability. In Proceedings of the 2020 ACM/IEEE international conference on human-robot interaction (pp. 261–270). Cambridge, United Kingdom.
Logg, J. M., Minson, J. A., amp; Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90?103.
Lyell, D., amp; Coiera, E. (2017). Automation bias and verification complexity: A systematic review. Journal of the American Medical Informatics Association, 24(2), 423? 431.
Lyons, J. B., Hamdan, I., amp; Vo, T. Q. (2023). Explanations and trust: What happens to trust when a robot partner does something unexpected? Computers in Human Behavior, 138, 107473.
Lyons, J. B., Nam, C. S., Jessup, S. A., Vo, T. Q., amp; Wynne, K. T. (2020, September). The role of individual differences as predictors of trust in autonomous security robots. In 2020 IEEE International Conference on Human- Machine Systems (ICHMS) (pp. 1?5). Rome, Italy.
Lyons, J. B., Sadler, G. G., Koltai, K., Battiste, H., Ho, N. T., Hoffmann, L. C., ... Shively, R. (2017). Shaping trust through transparent design: Theoretical and experimental guidelines. In: Savage-Knepshield, P., amp; Chen, J (Eds.), Advances in Human Factors in Robots and Unmanned Systems (pp.127?136). Springer International Publishing.
Madhavan, P., amp; Wiegmann, D. A. (2007). Similarities and differences between human–human and human– automation trust: An integrative review. Theoretical Issues in Ergonomics Science, 8(4), 277?301.
Martinez, J. E., VanLeeuwen, D., Stringam, B. B., amp; Fraune, M. R. (2023, March). Hey?! What did you think about that robot? Groups polarize users’ acceptance and trust of food delivery robots. In Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (pp. 417?427). https://doi.org/10.1145/3568162.3576984
Mayer, R. C., Davis, J. H., amp; Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709?734.
McGuirl, J. M., amp; Sarter, N. B. (2006). Supporting trust calibration and the effective use of decision aids by presenting dynamic system confidence information. Human Factors, 48(4), 656?665.
Meng, J., amp; Berger, B. K. (2019). The impact of organizational culture and leadership performance on PR professionals’ job satisfaction: Testing the joint mediating effects of engagement and trust. Public Relations Review, 45(1), 64?75.
Merritt, S. M., Heimbaugh, H., LaChapell, J., amp; Lee, D. (2013). I trust it, but I don’ t know why: Effects of implicit attitudes toward automation on trust in an automated system. Human Factors, 55(3), 520?534.
Mirnig, N., Stollnberger, G., Miksch, M., Stadler, S., Giuliani, M., amp; Tscheligi, M. (2017). To err is robot: How humans assess and act toward an erroneous social robot. Frontiers in Robotics and AI, 4, 21.
Montague, E., amp; Xu, J. (2012). Understanding active and passive users: The effects of an active user using normal, hard and unreliable technologies on user assessment of trust in technology and co-user. Applied Ergonomics, 43(4), 702?712.
Montague, E., Xu, J., amp; Chiou, E. (2014). Shared experiences of technology and trust: An experimental study of physiological compliance between active and passive users in technology-mediated collaborative encounters. IEEE Transactions on Human-Machine Systems, 44(5), 614?624.
Mosier, K. L., amp; Skitka, L. J. (1996). Human decision makers and automated decision aids: Made for each other? In Parasuraman, R., amp; Mouloua. M (Eds.), Automation and human performance (pp. 201?220). CRC Press.
Müller, R., Schischke, D., Graf, B., amp; Antoni, C. H. (2023). How can we avoid information overload and techno-frustration as a virtual team? The effect of shared mental models of information and communication technology on information overload and techno-frustration. Computers in Human Behavior, 138, 107438.
Naiseh, M., Al-Thani, D., Jiang, N., amp; Ali, R. (2021). Explainable recommendation: When design meets trust calibration. World Wide Web, 24(5), 1857?1884.
Naiseh, M., Al-Thani, D., Jiang, N., amp; Ali, R. (2023). How the different explanation classes impact trust calibration: The case of clinical decision support systems. International Journal of Human-Computer Studies, 169, 102941.
Oh, S., Seong, Y., Yi, S., amp; Park, S. (2020). Neurological measurement of human trust in automation using electroencephalogram. International Journal of Fuzzy Logic and Intelligent Systems, 20(4), 261?271.
Okamura, K., amp; Yamada, S. (2020). Adaptive trust calibration for human-AI collaboration. Plos One, 15(2), e0229132. https://doi.org/10.1371/journal.pone.0229132
Okuoka, K., Enami, K., Kimoto, M., amp; Imai, M. (2022). Multi-device trust transfer: Can trust be transferred among multiple devices? Frontiers in Psychology, 13, 920844.
Onnasch, L., amp; Panayotidis, T. (2020, December). Social loafing with robots-An empirical investigation. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 64(1), 97?101.
Ososky, S., Schuster, D., Phillips, E., amp; Jentsch, F. G. (2013, March). Building appropriate trust in human-robot teams. In Proceedings of the 2013 AAAI Spring Symposium (pp. 60?65). Palo Alto, CA, USA.
Papenmeier, A., Englebienne, G., amp; Seifert, C. (2019). How model accuracy and explanation fidelity influence user trust. arXiv preprint, arXiv:1907.12652.
Parasuraman, R., amp; Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381?410.
Parasuraman, R., amp; Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230? 253.
Perkins, R., Khavas, Z. R., amp; Robinette, P. (2021). Trust calibration and trust respect: A method for building team cohesion in human robot teams. arXiv preprint, arXiv: 2110.06809.
Petrocchi, S., Iannello, P., Lecciso, F., Levante, A., Antonietti, A., amp; Schulz, P. J. (2019). Interpersonal trust in doctor-patient relation: Evidence from dyadic analysis and association with quality of dyadic communication. Social Science amp; Medicine, 235, 112391.
Pop, V. L., Shrewsbury, A., amp; Durso, F. T. (2015). Individual differences in the calibration of trust in automation. Human Factors, 57(4), 545?556.
Pynadath, D. V., Wang, N., amp; Kamireddy, S. (2019, September). A Markovian method for predicting trust behavior in human-agent interaction. In Proceedings of the 7th International Conference on Human-Agent Interaction (pp. 171?178). Kyoto, Japan.
Quinn, D. B. (2018). Exploring the efficacy of social trust repair in human-automation interactions (Unpublished doctoral dissertation). Clemson University, Lawton.
Ragni, M., Rudenko, A., Kuhnert, B., amp; Arras, K. O. (2016, August). Errare humanum est: Erroneous robots in human- robot interaction. In 2016 25th IEEE International symposium on robot and human interactive communication (RO-MAN) (pp. 501?506). New York, NY, USA.
Rempel, J. K., Holmes, J. G., amp; Zanna, M. P. (1985). Trust in close relationships. Journal of Personality and Social Psychology, 49(1), 95?112.
Robinette, P., Howard, A. M., amp; Wagner, A. R. (2015, October). Timing is key for robot trust repair. In Social Robotics: 7th International Conference, ICSR. Paris, France.
Robinette, P., Howard, A. M., amp; Wagner, A. R. (2017a). Conceptualizing overtrust in robots: Why do people trust a robot that previously failed?. In Lawless, W., Mittu, R., Sofge, D., amp; Russell, S (Eds), Autonomy and artificial intelligence: A threat or savior? (pp. 129?155). Springer, Cham.
Robinette, P., Howard, A. M., amp; Wagner, A. R. (2017b). Effect of robot performance on human-robot trust in time-critical situations. IEEE Transactions on Human- Machine Systems, 47(4), 425?436.
Robinette, P., Li, W., Allen, R., Howard, A. M., amp; Wagner, A. R. (2016, March). Overtrust of robots in emergency evacuation scenarios. In 2016 11th ACM/IEEE international conference on human-robot interaction (HRI) (pp. 101?108). Christchurch, New Zealand.
Rossi, A., Dautenhahn, K., Koay, K. L., amp; Walters, M. L. (2017, November). Human perceptions of the severity of domestic robot errors. In Social Robotics: 9th International Conference (ICSR) (pp. 647?656).Tsukuba, Japan.
Salem, M., Eyssel, F., Rohlfing, K., Kopp, S., amp; Joublin, F. (2013). To err is human (-like): Effects of robot gesture on perceived anthropomorphism and likability. International Journal of Social Robotics, 5, 313?323.
Sanders, T. L., Kaplan, A., Koch, R., Schwartz, M., amp; Hancock, P. A. (2019). The relationship between trust and use choice in human-robot interaction. Human Factors, 61(4), 614?626.
Sanders, T. L., MacArthur, K., Volante, W., Hancock, G., MacGillivray, T., Shugars, W., amp; Hancock, P. A. (2017, September). Trust and prior experience in human-robot interaction. In Proceedings of the human factors and ergonomics society annual meeting (pp. 1809?1813). Sage CA: Los Angeles, CA.
Sarkar, S., Araiza-Illan, D., amp; Eder, K. (2017). Effects of faults, experience, and personality on trust in a robot co-worker. arXiv preprint, arXiv:1703.02335.
Sebo, S. S., Krishnamurthi, P., amp; Scassellati, B. (2019, March). “I don't believe you”: Investigating the effects of robot trust violation and repair. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 57?65). Daegu, Korea (South).
Seong, Y., amp; Bisantz, A. M. (2008). The impact of cognitive feedback on judgment performance and trust with decision aids. International Journal of Industrial Ergonomics, 38(7-8), 608?625.
Shank, D. B., Bowen, M., Burns, A., amp; Dew, M. (2021). Humans are perceived as better, but weaker, than artificial intelligence: A comparison of affective impressions of humans, AIs, and computer systems in roles on teams. Computers in Human Behavior Reports, 3, 100092.
Shi, Y., Azzolin, N., Picardi, A., Zhu, T., Bordegoni, M., amp; Caruso, G. (2020). A Virtual reality-based platform to validate HMI design for increasing user’s trust in autonomous vehicle. Computer-Aided Design and Applications, 18(3), 502?518.
Shin, D., Zaid, B., amp; Ibahrine, M. (2020, November). Algorithm appreciation: Algorithmic performance, developmental processes, and user interactions. In 2020 International Conference on Communications, Computing, Cybersecurity, and Informatics (CCCI) (pp. 1?5). Sharjah, United Arab Emirates.
Short, E., Hart, J., Vu, M., amp; Scassellati, B. (2010, March). No fair! An interaction with a cheating robot. In 2010 5th ACM/IEEE international conference on human-robot interaction (HRI) (pp. 219?226). Osaka, Japan.
Song, Y., amp; Luximon, Y. (2020). Trust in AI agent: A systematic review of facial anthropomorphic trustworthiness for social robot design. Sensors, 20(18), 5087.
Sweller, J. (2011). Cognitive load theory. Psychology of Learning and Motivation, 55, 37?76. https://doi.org/10. 1016/B978-0-12-387691-1.00002-8
Tam, K. Y., amp; Ho, S. Y. (2005). Web personalization as a persuasion strategy: An elaboration likelihood model perspective. Information Systems Research, 16(3), 271?291.
Toader, D. C., Boca, G., Toader, R., M?celaru, M., Toader, C., Ighian, D., amp; R?dulescu, A. T. (2019). The effect of social presence and chatbot errors on trust. Sustainability, 12(1), 256.
Ullman, D., amp; Malle, B. F. (2017, March). Human-robot trust: Just a button press away. In Proceedings of the companion of the 2017 ACM/IEEE international conference on human-robot interaction (pp. 309?310). Vienna, Austria.
van Maris, A., Lehmann, H., Natale, L., amp; Grzyb, B. (2017, March). The influence of a robot’ s embodiment on trust: A longitudinal study. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on human-robot interaction (pp. 313?314). Vienna, Austria.
van Pinxteren, M. M., Wetzels, R. W., Rüger, J., Pluymaekers, M., amp; Wetzels, M. (2019). Trust in humanoid robots: Implications for services marketing. Journal of Services Marketing, 33(4), 507?518.
Volante, W. G., Sosna, J., Kessler, T., Sanders, T., amp; Hancock, P. A. (2019). Social conformity effects on trust in simulation-based human-robot interaction. Human Factors, 61(5), 805?815.
Wagner, A. R., Borenstein, J., amp; Howard, A. (2018). Overtrust in the robotic age. Communications of the ACM, 61(9), 22?24.
Walker, F., Wang, J., Martens, M. H., amp; Verwey, W. B. (2019). Gaze behaviour and electrodermal activity: Objective measures of drivers’ trust in automated vehicles. Transportation Research part F: Traffic Psychology and Behaviour, 64, 401?412.
Wang, N., Pynadath, D. V., Rovira, E., Barnes, M. J., amp; Hill, S. G. (2018). Is it my looks? Or something I said? The impact of explanations, embodiment, and expectations on trust and performance in human-robot teams. In Ham, J., Karapanos, E., Morita, P., amp; Burns, C (Eds), Persuasive Technology (pp. 56?69). Springer, Cham.
Washburn, A., Adeleye, A., An, T., amp; Riek, L. D. (2020). Robot errors in proximate HRI: How functionality framing affects perceived reliability and trust. ACM Transactions on Human-Robot Interaction (THRI), 9(3), 1?21.
Wickens, C. D. (1995). Designing for situation awareness and trust in automation. IFAC Proceedings Volumes, 28(23), 365?370.
Wullenkord, R., Fraune, M. R., Eyssel, F., & Šabanović, S. (2016, August). Getting in touch: How imagined, actual, and physical contact affect evaluations of robots. In 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 980–985). New York, USA.
Xu, J., de’Aira, G. B., amp; Howard, A. (2018, August). Would you trust a robot therapist? Validating the equivalency of trust in human-robot healthcare scenarios. In 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 442?447). Nanjing, China.
Xu, J., amp; Howard, A. (2018, August). The impact of first impressions on human-robot trust during problem-solving scenarios. In 2018 27th IEEE international symposium on robot and human interactive communication (RO-MAN) (pp. 435?441). Nanjing, China.
Xu, J., amp; Montague, E. (2013, September). Group polarization of trust in technology. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (pp. 344?348). Sage CA: Los Angeles, CA.
Yen, C., amp; Chiang, M. C. (2021). Trust me, if you can: A study on the factors that influence consumers’ purchase intention triggered by chatbots based on brain image evidence and self-reported assessments. Behaviour amp; Information Technology, 40(11), 1177?1194.
Trust dampening and trust promoting: A dual-pathway of trust calibration in human-robot interaction
HUANG Xinyu, LI Ye
(School of Psychology, Central China Normal University & Key Laboratory of Adolescent Cyberpsychology and Behavior, Ministry of Education, Wuhan 430079, China)
Abstract: Trust is the basis of successful human-robot interaction. However, humans do not always hold an appropriate level of trust in human-robot interaction; they may instead show trust bias, which includes both over-trust and under-trust. Trust bias can harm human-robot interaction, so trust calibration is necessary. Trust calibration is often achieved through two pathways: trust dampening and trust promoting. Trust dampening focuses on how to reduce an unduly high level of trust in robots, while trust promoting focuses on how to raise an unduly low level of trust in robots. For future directions, we suggest further optimizing the measurement methods used to evaluate calibration effects, clarifying the cognitive processes during and after trust calibration, and exploring more boundary conditions. Finally, in order to boost human-robot collaboration, researchers are encouraged to explore personalized and fine-grained trust calibration strategies based on individual differences and to further clarify the various reasons why trust bias occurs.
Keywords: trust calibration, trust bias, trust dampening, trust promoting, human-robot interaction