The Role Of Perceived Gratifications On Virtual Conversational Assistants’ Privacy Concerns

Arezoo Fakhimi, Tony Garry, Sergio Biggemann

Research output: Unpublished contribution to conference › Abstract › peer-review


Natural language processing and deep machine learning enable VCAs to understand, process, and respond to users’ utterances in real time. Users can talk with VCAs in a human-like way, and VCAs are able to engage in dialogue with them. Alongside the benefits users realise, this process is associated with a potential risk of being overheard by VCAs, which erodes trust. Nevertheless, VCA usage is increasing worldwide. Bearing in mind that users are not naïve about privacy issues, this research aims to investigate why people are willing to make themselves vulnerable by using VCAs. We conducted 31 in-depth interviews with users of Siri, Alexa, and Google Assistant in 5 countries, which illustrate that anthropomorphic features lead users to perceive gratifications in a different form compared with interactions experienced with previous machines.
These perceived gratifications cause users to ignore privacy risks and uncertainty, allowing trust to build between humans and machines.
Original language: English
Number of pages: 1
Publication status: Unpublished - 5 Dec 2022
Event: Australian and New Zealand Marketing Academy - Perth, Australia
Duration: 5 Dec 2022 - 7 Dec 2022
Conference number: 2023


Conference: Australian and New Zealand Marketing Academy
Abbreviated title: ANZMAC


