Technology is the product of human ingenuity, yet it has come to shape our lives profoundly. The introduction of artificial intelligence (AI) has further intertwined our existence with technology, and this dynamic relationship brings both benefits and challenges. In the past decade, AI has moved into professional domains such as law enforcement, healthcare, and civil services. For example, Japan has seen a steady rise in the adoption of robots in nursing homes (Mori, 2023). China is currently developing a simulacrum of a hospital called “Agent Hospital,” in which patients, nurses, and doctors are autonomous agents powered by large language models (Li et al., 2024). Dubai launched a robot police officer to patrol malls and tourist attractions in 2017 (BBC News, 2017). The New Zealand Government recently announced a project to develop a conversational companion called Gov-GPT as “a digital front-door where Kiwis can quickly and easily find answers to their questions about Government support and services” (Pryor, 2024).
Although some of these AI technologies are still in development, fundamental questions arise from the service receivers’ perspective as AI gradually assumes concrete roles in these professional fields. The overarching question is: Will humans, as service receivers, willingly cede control to AI technology? This question can be broken down into several sub-questions: What are public attitudes toward AI robots in different settings? For example, are people comfortable with AI police patrolling while they shop? Do people trust AI doctors to plan their medical treatment? And do we inherently perceive AI robots as superior to humans, and if so, what factors contribute to this perception?
This study utilizes Hofstede’s (1985) concept of power distance to examine the dynamics between service providers (e.g., physicians) and service receivers (e.g., patients), drawing parallels with the emerging interaction between robots and humans. Hofstede (1985, p. 347) defined power distance as “the degree to which the less powerful members of a society accept and expect that power is distributed unequally.” Societies with a higher power distance culture exhibit a stronger awareness and acceptance of unequal power distribution. In such societies, a distinct hierarchical structure separates the upper and lower classes, fostering a natural tendency for those in lower positions to submit to the status and authority of their superiors, with minimal questioning or resistance. Conversely, low power distance cultures encourage a more egalitarian approach, with people questioning and challenging authority figures (Shah et al., 2015). This framework is also relevant in the human-computer realm. People with a high human-computer power distance readily concede superiority to technology and accept the efficiency and accuracy of computers as unquestionable. Conversely, those with a low power distance tend to take a more critical view of computers, recognizing their strengths and limitations without blind deference.
In high power distance cultures, people often accept the power imbalance because they perceive the superior as having more knowledge, experience, or resources. This lack of understanding creates a “black box” effect, in which the superior’s decision-making process remains opaque. We hypothesize that in settings characterized by high power distance, service receivers will be less inclined to accept an AI robot as the provider, because replacing a human with an AI system could intensify this lack of understanding. AI algorithms, especially complex ones like deep learning models, can be difficult to interpret even for experts. This could deepen the sense of a “black box,” making people feel even more uncertain and less in control. By examining public acceptance of AI robots across varying power distance settings and identifying contributing factors, this study aims to propose strategies for enhancing trust in AI robots in practical applications, ultimately benefiting human society through AI technology development.
Literature Review
Human-Computer Interaction
AI’s “black box” nature often leads to a perception of its superiority, creating a “technological power distance,” which refers to the inherent belief that AI is superior to humans in efficiency and effectiveness. However, this belief does not necessarily translate into acceptance of AI products. Risk tolerance towards AI remains a critical factor in adoption.
Studies show contrasting perspectives on the relationship between uncertainty avoidance and technology acceptance. Some studies show that individuals in cultures characterized by high uncertainty avoidance tend to conform to established norms to minimize potential ambiguity. This inclination may lead to a decreased willingness to accept new technology. For example, Kim and Kim (2021) found that individuals with higher uncertainty avoidance scores were less likely to accept news written by robotic journalists. Similarly, Zhang et al. (2022) observed that high uncertainty avoidance hinders the adoption of wearable medical devices. Jan et al. (2022) emphasized the importance of reducing the perceived uncertainty surrounding new technologies to facilitate their widespread acceptance. This resonates with Özbilen’s (2017) cross-cultural study, which revealed a correlation between high uncertainty avoidance and lower technology adoption rates. However, McCoy (2002) offers a contrasting perspective, suggesting that new technologies can reduce uncertainty by providing more information, streamlining communication, and enhancing control and convenience. This proposition suggests that individuals with a high propensity for uncertainty avoidance might be more receptive to adopting technologies that effectively alleviate anxiety. McCoy’s study, however, was conducted before AI technology became widespread and AI robots became integrated into daily life; the relationship between risk awareness and technology acceptance that it identified may therefore not apply to the current era of AI.
Risk perception varies across different scenarios. In high power distance cultures, individuals may feel more vulnerable or at risk due to the perceived imbalance of power in interactions with authority figures. For instance, applying for a document at a government building is generally considered less risky than undergoing medical treatment. When AI robots are introduced to replace service providers in these settings, people’s acceptance might differ due to varying levels of perceived risk. This concept will be explored further in a later section.
In addition to perceived risk, many elements influence human-computer interaction, including personality, expectations, perceptions, evaluations, and technological familiarity on the human side, and design, function, transparency, interpretability, reliability, and adaptability on the machine side. This study focuses specifically on the “human” aspect, concentrating on individuals’ perceptions of technology’s social value and their technological familiarity. The link between perceived social value and technological evaluation is intuitive: when individuals believe that a technology enhances human well-being, they value it more highly. However, the relationship between technological knowledge and valuation is more nuanced. While increased familiarity with technology’s functionalities often fosters confidence and a willingness to accept its superiority, it can also breed skepticism and a desire to avoid excessive dependence (Lewis, 2017).
Power Distance Across Societal Domains: Government, Healthcare, and Law Enforcement
To investigate the relationship between power distance and technology acceptance, this study selects three professional fields: civil services, healthcare, and law enforcement. The rationale guiding this selection is as follows. First, these sectors represent fields where ordinary citizens regularly interact with professionals (civil servants, physicians, and police officers), making the robot scenario easier for survey respondents to imagine. Second, the power distance varies across these professional domains, influenced by both the expertise required and the complexities of the problems encountered. In situations that require specialized knowledge, individuals tend to defer to experts, particularly when faced with complex problems. Notably, the medical sector presents a unique case due to its inherent complexity and the mysterious nature of disease, which can be challenging for lay people to understand. In contrast, the rules governing civil servants and police officers are generally more transparent in democracies, although their inherent power dynamics differ significantly based on their specific duties. This variation in power distance allows for an examination of whether technology acceptance also varies accordingly. Exploring these subtle differences across domains and how they relate to public acceptance of robotic counterparts is a crucial step in maximizing the benefits of cutting-edge AI technology. The following sections explore the application of the power distance concept in the fields of civil services, law enforcement, and healthcare.
The Government-Public Power Distance
Cross-national comparative studies show that democracies tend to have lower levels of power distance between the government and the people compared to non-democracies. Terzi (2011) explored the relationship between power distance and political tendencies (authoritarian vs. democratic) by using the Power Distance Scale and the Democratic Tendency Scale for data collection. The results showed a significant positive relationship between power distance and authoritarian tendencies and a significant negative relationship between power distance and democratic tendencies. Based on a comparative study of China and the West, Mu et al. (2016) found that governments in high power distance countries tend to use information strategically and manipulate data analysis. Shaheer et al. (2019) obtained similar results regarding the relationship between promoting democracy and reducing the power gap. Their corruption study, which examined 30,249 state-owned and private enterprises in 50 countries, found that state-owned enterprises are more prone to managerial rent-seeking behaviors in deteriorating institutional environments, often leading to bribery and corruption. Based on these findings, they argue that promoting democracy, strengthening the rule of law, and narrowing the power gap are crucial for curbing bribery in state-owned enterprises. Salehan et al. (2018), drawing on technological determinism, analyzed the relationship between technology, social structure, and cultural values and found that technology is an important driver of cultural homogeneity, which leads to a higher degree of individualism and a lower degree of power distance. Their study therefore concluded that technology is useful in promoting democracy because it reduces power distance.
The previous findings clearly demonstrate the connection between reducing power distance and promoting democracy. In contrast to democratic nations, governments in non-democratic countries are more likely to manipulate information and data analysis and operate with less transparency in their administrative and political processes. Citizens in such countries lack the right to monitor their government’s institutional design and policy operation, which can lead to significant disparities in power between the public and the ruling elite and foster widespread corruption. This unequal power dynamic is often unconsciously accepted by the public, as highlighted by research showing a negative correlation between the level of democracy and power distance: societies with higher levels of democracy tend to exhibit lower power distances. Increased democratization generally leads to more transparent government decision-making, thereby empowering citizens to monitor their government and reducing the power gap between them.
The Power Distance in Police Law Enforcement
While the police department is part of the government, the nature of police work differs significantly from that of most civil servants. The police are entrusted by the state with the power to deal with the public’s problems related to crime, conflict, violence, and emergencies. Because of the urgency and risk involved in most of the tasks the police perform, they need to be armed and use weapons in a lawful manner when necessary. In Taiwan, historical and cultural factors have fostered a strong image of the police as enforcers of order and discipline. This perception, often instilled in children through parental warnings such as “Don’t misbehave or the police will arrest you,” remains a common tool for discouraging unwanted behavior in young children.
The power distance between the police and the public is further reinforced by the police uniform, which distinguishes them from general civil servants. A study on public attitudes towards the police revealed that the sense of authority conveyed by uniforms has always been crucial to the police role. The uniform symbolizes authority and legitimacy, so the combination of police attitudes and uniforms can create a sense of pressure or intimidation, influencing public attitudes and behavior (Bell, 1982). Another study on public perceptions of police officers found that the same individual was perceived as more competent, trustworthy, intelligent, and helpful when wearing a uniform compared to civilian clothes (Singer & Singer, 2012). In summary, the unique nature of police work establishes a higher power distance between the police and the public, compared to general civil servants. This distance is further amplified by the police uniform, a symbol of authority that can elicit feelings of pressure or intimidation, ultimately shaping public attitudes and behavior.
The Power Distance in Healthcare
Research has consistently revealed a significant power distance between physicians and patients, rooted in cultural norms and expectations. A study conducted in a secondary mental healthcare facility in Taiwan found that patients generally felt they were expected to defer to the physician’s professional authority. They perceived their role as simply stating their symptoms and trusting the physician’s expertise to guide the rest of their treatment. In the physician-patient relationship, patients see themselves as recipients, not providers, and they internalize this role because they have been taught to do so since childhood (Lin et al., 2020). Lambert (1996) and Harris (2003) have pointed out that the degree of politeness can help us better understand the power and social distance between healthcare professionals and patients. Yin et al. (2012) examined courtesy attitudes between pediatricians, patients, and patients’ parents during consultations and found that pediatricians were less courteous than patients and patients’ parents; the emphasis on efficiency in pediatric clinics, where physicians often take the lead in communication, contributes to an asymmetry of power between physicians and patients. In general, people are quite deferential to physicians, particularly in cultures where respect for experts is deeply ingrained (Yin et al., 2012). In Taiwanese culture, public respect for the expertise of physicians, lawyers, and teachers confers professional power on them (Zheng, 2010). This deeply ingrained cultural deference towards experts, particularly physicians, creates a substantial power imbalance in the healthcare setting, where patients are often hesitant to question or challenge their doctors.
Summary
In democracies, the power gap between the public and the government is smaller than in non-democratic nations. This is because democratic governments tend to be more transparent and have clearer administrative procedures. As a result, most everyday interactions with public authorities, such as applying for documents, follow established rules and regulations, leaving minimal room for individual discretion. This transparency and adherence to the law foster a sense of fairness and reduce perceptions of uncertainty and of an uneven power dynamic. The visual cues associated with police presence, such as the visible display of weapons and the symbolic authority conveyed by uniforms, can widen the power gap between the police and the public and reinforce perceptions of power disparity. As for the medical realm, society places a high level of prestige upon the medical profession. The inherent complexity and specialized knowledge associated with healthcare create a significant power differential in the physician-patient relationship. This high threshold of medical expertise places the physician in a position of considerable authority and creates a high physician-patient power distance.
Power Distance, Risk Aversion, and Attitude Toward AI
Individuals’ acceptance of AI robots is likely influenced by risk perception, which varies across power distance settings. In high power distance settings, where individuals may feel uncertain, vulnerable, and lacking control, the “black box” nature of AI can amplify risk perception and trigger aversion. This may lead to greater reluctance to embrace AI robots. Conversely, in low power distance settings, with greater transparency and reduced power imbalance, acceptance of AI is expected to be higher. In addition to risk perception and power distance, factors such as individuals’ familiarity with technology and their perception of its social value may also influence their acceptance of AI robots. This study will examine how these factors interact to shape public attitudes toward AI in various professional settings. Based on this, we propose three hypotheses:
H1: Public acceptance of AI robots is negatively associated with power distance. This association will be examined by assessing whether acceptance varies based on the power distance between service providers and receivers in civil service, police law enforcement, and healthcare.
H2: People’s perceived social value of new technology is positively associated with their acceptance of AI robots.
H3: People’s familiarity with new technology is positively associated with their acceptance of AI robots.
Research Methodology
Questionnaire
This study employed a nationwide questionnaire survey to gather data on public perceptions of AI in the fields of civil service, health care, and law enforcement in Taiwan. Given the potential ambiguity associated with the term “AI” and the increasing presence of robots in various public spaces, the questionnaire utilized the word “robot” to provide respondents with a more concrete and relatable object of reference. This approach aimed to reduce potential ambiguity and misinterpretations, enhance the validity of responses, and ensure that participants were focusing on the tangible and visible manifestations of AI technology that they are most likely to encounter in their daily lives. By using the term robot, we also aimed to avoid technical jargon and tap into the everyday relevance of AI as experienced through robotic systems, further facilitating clearer understanding and more focused responses. However, this study by no means intends to imply that robots will replace human civil servants, physicians, or police officers.
The questionnaire employed a 5-point Likert scale for data collection. The following three questionnaire items were designed to assess public acceptance of AI robots across different settings:
- To what extent do you agree with the statement “If robots could be physicians, they would provide better medical services than human physicians”? (R-phy)
- To what extent do you agree with the statement “If robots could be civil servants, they would provide better civil services than human civil servants”? (R-ser)
- To what extent do you agree with the statement “If robots could be police officers, they would make people feel safer than human police officers”? (R-pol)
Three questionnaire items were designed to assess perceived social value of technology:
- To what extent do you agree with the statement “Tech companies like Google and Microsoft are making investments in Taiwan’s Internet infrastructure, facilitating the country’s integration into the global community”? (P1-visibility)
- To what extent do you agree with the statement “The more advanced the technology is, the more people can express their views freely”? (P2-freedom)
- To what extent do you agree with the statement “The more advanced the technology is, the more equitable the opportunities created for disadvantaged groups (e.g., those with lower income or disabilities)”? (P3-justice)
Three questionnaire items were used to gauge the public’s familiarity with technology usage:
- To what extent do you agree with the statement “I frequently use social media, such as Facebook, to share my perspectives or personal experiences”? (F1-share)
- To what extent do you agree with the statement “I frequently use social media, such as Facebook, IG, or TikTok, to connect with others”? (F2-connect)
- To what extent do you agree with the statement “I frequently use search engines, such as Google, or navigational tools, such as Google Maps, to help me make judgments”? (F3-judgement)
To ensure the clarity and effectiveness of the survey instrument, a pre-test was conducted with a small group of colleagues with professional backgrounds in psychology, labor, media, political science, and social welfare who are familiar with the research area. Their feedback informed minor revisions to wording and item order, improving the instrument’s understandability. Additionally, to ensure content validity, the survey design benefitted from a review of relevant literature on human-machine interaction and AI social robotics and discussions with experts in survey methodology. These discussions focused particularly on the challenges of accurately capturing participant understanding of “robot” concepts, as identified in the literature and confirmed through our pre-test. This combined approach helped refine the instrument to ensure the items adequately reflected the intended constructs and minimized potential misunderstandings related to robot terminology.
The internal consistency of the questionnaire was assessed using Cronbach’s alpha, yielding a coefficient of .698 for the entire instrument (N = 1282). The internal consistency of the individual scales was as follows: acceptance of AI robots (α = .608), familiarity with technology (α = .595), and perceived social value of technology (α = .517). While the overall instrument demonstrates acceptable reliability, the lower alpha values for these scales suggest potential limitations in their measurement, possibly due to the heterogeneity of the sample or the specific wording of the items. Regarding sample heterogeneity, the weighted standard deviation of respondents’ age (16.83) indicates a fair degree of age variability, with respondents ranging from 18 to older than 70 (see Table 1). This age diversity could influence how participants respond to the questionnaire, especially on topics where age-related differences in perspectives or experiences are expected. These findings suggest that further development and refinement of these scales may be necessary in future research to improve their internal consistency; more details are discussed in the research limitations section. Nevertheless, the current study provides valuable insights into public perceptions of AI robots, and the data collected can inform future research and guide the development of more robust measurement tools.
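For readers who wish to replicate the reliability check, the following is a minimal sketch of the Cronbach’s alpha computation (alpha equals k/(k-1) times one minus the ratio of the summed item variances to the variance of the summed scale); the column names and toy data are hypothetical:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a scale whose columns are its items."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses to the three robot-acceptance items (1-5 Likert).
df = pd.DataFrame({
    "R_phy": [2, 4, 3, 1, 5, 2],
    "R_ser": [3, 4, 2, 2, 4, 3],
    "R_pol": [1, 3, 2, 1, 4, 2],
})
print(round(cronbach_alpha(df), 3))
```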
Case Selection
This study was conducted in Taiwan, whose proactive approach to AI development, unique cultural context, and robust data availability make it an ideal setting. The government’s AI initiatives, such as the AI Taiwan Action Plan, and its progressive regulatory framework, such as the Unmanned Vehicles Technology Innovative Experimentation Act,[1] foster a technologically advanced environment (Executive Yuan, 2019; You, 2024). Taiwan’s blend of Eastern and Western influences offers a unique perspective on the interplay between power distance and technology acceptance, contributing to a global understanding of AI adoption. Additionally, the availability of reliable demographic data and the feasibility of conducting a nationwide survey enabled the collection of representative data on public perceptions. By focusing on Taiwan, this study aims to provide insights into the factors influencing public acceptance of AI robots in a technologically advanced society, informing responsible AI development and deployment strategies both locally and globally.
Sampling and Survey Implementation
A telephone survey was conducted from January 9 to January 31, 2023, targeting adults aged 18 or above residing in 20 counties and cities across Taiwan. Both landline and mobile phones were used to reach respondents. The statistical population for the survey was defined based on the demographic data provided by the Department of Household Registration, Ministry of the Interior, as of December 2022. Stratified proportional sampling was employed, with the number of valid samples allocated to each stratum determined by the proportion of the population aged 18 or above in each county and city relative to the total population of this age group in Taiwan.
The landline telephone survey employed the Computer Assisted Marketing Interviewing (CAMI) System from Yuhma Technology Company, a leading survey system provider in Taiwan. The master sample list comprised the domestic telephone directory from Chunghwa Telecom, the largest phone company in Taiwan. Random Digit Dialing (RDD) was used in two stages. Initially, telephone numbers were randomly selected from the master list. Subsequently, “last 2-digit random dialing” finalized the sample telephone numbers, ensuring coverage of individuals not listed in the directory. To guarantee that each eligible resident within a household had an equal chance of selection, household random sampling was implemented. However, if the selected respondent was unavailable, a randomized respondent substitution method was used. Regarding the mobile phone survey, the master sample list was established in two steps. Mobile phone numbers in Taiwan consist of 10 digits. The first five digits were randomly selected from the “Current Status of Mobile Internet Service Subscriber Number Allocation,” periodically updated by the National Communications Commission (NCC). The last five digits were then randomly generated.
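The two-stage number generation described above can be sketched as follows; the directory numbers and mobile prefixes shown are hypothetical placeholders, not actual allocations:

```python
import random

def landline_rdd(directory_numbers: list[str], n: int) -> list[str]:
    """Stage 1: draw numbers from the directory; stage 2: randomize the
    last 2 digits so unlisted households can also be reached."""
    drawn = random.sample(directory_numbers, n)
    return [num[:-2] + f"{random.randint(0, 99):02d}" for num in drawn]

def mobile_rdd(allocated_prefixes: list[str], n: int) -> list[str]:
    """A 10-digit mobile number = a 5-digit allocated prefix (from the
    NCC allocation list) + 5 randomly generated digits."""
    return [random.choice(allocated_prefixes) + f"{random.randint(0, 99999):05d}"
            for _ in range(n)]

# Hypothetical inputs for illustration only.
print(landline_rdd(["0223456789", "0287654321"], 2))
print(mobile_rdd(["09123", "09876"], 3))
```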
A total of 1,074 valid samples were successfully interviewed for the landline survey and 208 for the mobile phone survey, yielding a combined sample size of 1,282. The demographic characteristics of the sample were examined in terms of gender, age, and place of domicile. Chi-square tests revealed no statistically significant differences between the sample and the target population for any of these variables (gender: χ² = 0.000, p = 1.000; age: χ² = 2.930, p = .992; place of domicile: χ² = 0.795, p = 1.000). The demographic composition of the sample is presented in Table 1.
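As an illustration of this representativeness check, a chi-square goodness-of-fit test compares the sample’s observed counts against counts expected from population proportions. The sketch below uses hypothetical gender figures, not the study’s actual tallies:

```python
from scipy.stats import chisquare

observed = [628, 654]        # hypothetical male/female counts (n = 1,282)
pop_props = [0.49, 0.51]     # hypothetical population proportions
expected = [p * sum(observed) for p in pop_props]

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {stat:.3f}, p = {p:.3f}")  # large p -> sample matches population
```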
Data Processing
This study employed a five-point Likert scale for questionnaire items. The answer options 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, and 5 = strongly agree were used to gauge respondents’ level of agreement. Higher scores indicate stronger agreement with statements, suggesting the superiority of AI robots over human counterparts, implying a higher acceptance of AI robots. The neutral option was interpreted as the respondent having no strong opinion on the given statement.
To comprehensively assess public acceptance of the three categories of robots – robotic physicians, civil servants, and police officers – a composite index was constructed by aggregating the scores of individual questionnaire items pertaining to each type. This integrated index falls on a scale from 3, signifying the most unfavorable attitude, to 15, representing the most favorable attitude.
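Constructing the composite index amounts to summing the three item scores per respondent, as the minimal sketch below shows (column names hypothetical):

```python
import pandas as pd

df = pd.DataFrame({            # hypothetical 1-5 Likert responses
    "R_phy": [2, 4, 5],
    "R_ser": [3, 4, 5],
    "R_pol": [1, 3, 5],
})

# Composite attitude index: sum of the three items, ranging from 3 to 15.
df["attitude_index"] = df[["R_phy", "R_ser", "R_pol"]].sum(axis=1)
print(df["attitude_index"].tolist())   # [6, 11, 15]
```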
Research Results
Demographic Analysis of Overall Attitude Towards Robots
An independent-samples t-test was conducted to examine potential gender disparities in acceptance of AI robots, as measured by the integrated attitude index (Table 2). The results revealed a statistically significant difference between males (M = 8.2, SD = 3.1) and females (M = 7.5, SD = 2.7), with a t-value of 4.49 (p < .001). This suggests that females, on average, held slightly more negative attitudes towards robotic physicians, civil servants, and police officers compared to males. However, it is crucial to note that the effect size is relatively small, and further investigation is warranted to explore the underlying factors contributing to this observed gender difference.
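For transparency, such an independent-samples t-test can be run as in the sketch below; the data are simulated to roughly match the reported group means and are purely illustrative:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# Simulated attitude-index scores (range 3-15) for the two groups.
male = rng.normal(8.2, 3.1, 600).clip(3, 15)
female = rng.normal(7.5, 2.7, 650).clip(3, 15)

t, p = ttest_ind(male, female)   # independent-samples t-test
print(f"t = {t:.2f}, p = {p:.4f}")
```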
To investigate variations in acceptance of AI robots based on occupation, respondents were categorized into two groups: military, public, and teaching personnel (MPTP) and non-MPTP. An independent-samples t-test revealed a statistically significant difference between these groups in their attitudes towards robots, as measured by the integrated attitude index (Table 3). MPTP respondents (M = 6.3, SD = 2.9) exhibited a more negative stance compared to non-MPTP group (M = 7.9, SD = 2.9), with a t-value of -4.09 (p < .001).
An independent-samples t-test, using high school education as the cut-off, revealed no significant difference in overall attitudes towards robots between respondents with varying educational levels. This suggests that individuals’ educational background does not necessarily influence their general perception of robots.
Similarly, no significant difference in overall robot attitudes was observed between age groups, based on a t-test with a cut-off of 50 years old. However, the analysis did reveal a statistically significant difference (t = 4.80, p < .001) in attitudes towards robotic civil servants specifically. Although neither group expressed strong endorsement, respondents below 50 years old (M = 3.2, SD = 1.4) exhibited a slightly more positive attitude than those above 50 (M = 2.8, SD = 1.3) (Table 4).
Human-Computer Power Distance: Human’s Evaluation of Robots
Prior studies suggest that democracies generally exhibit lower power distance between the public and the government compared to non-democracies. However, even within democratic public sectors, different types of government officials may hold varying levels of power distance relative to the public. For example, police officers, due to their law enforcement and coercive capabilities, often occupy a higher perceived power position compared to general civil servants. This differential power dynamic raises the following question: Can the existing power distance between human police and the public be directly translated to the relationship between robotic police and the public? The further question is: Do people perceive robotic civil servants and police officers differently in terms of their superiority over humans? These questions are what H1 aims to address.
This study tests H1 by investigating whether public acceptance of AI robots differs across professional fields. For evaluating respondents’ overall attitudes towards AI robots, the subtle difference between strongly agree and agree on the positive end, or between strongly disagree and disagree on the negative end, was less important. Therefore, for testing H1, the Likert scale responses were consolidated into a binary variable capturing positive or negative sentiment. Responses of strongly agree and agree were recoded as agree (positive attitude) and assigned a code of 2. Responses of strongly disagree and disagree were recoded as disagree (negative attitude) and assigned a code of 1. Neutral stances, including don’t know/no opinion (originally coded as 3), and refuse to answer responses (originally treated as missing) were all coded as missing values. This recoding process facilitated data analysis while retaining the directional information inherent in the original responses.
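As a minimal sketch (in hypothetical pandas code; the study’s actual analysis software is not specified here), the recoding reads:

```python
import pandas as pd

# 1 = strongly disagree ... 5 = strongly agree; 3 = neutral/no opinion.
raw = pd.Series([1, 2, 3, 4, 5, 2, 4])

recode = {1: 1, 2: 1, 4: 2, 5: 2}   # 1 = disagree, 2 = agree
binary = raw.map(recode)            # 3 maps to NaN; refusals were already missing
print(binary.tolist())              # [1.0, 1.0, nan, 2.0, 2.0, 1.0, 2.0]
```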
Empirical findings revealed a significant difference in public acceptance of robotic civil servants compared to robotic police officers. As illustrated in Table 6, public attitudes towards robotic civil servants were mixed, with nearly half (45.6%) expressing optimism about their service performance relative to human counterparts. However, a similar proportion (50.1%) showed a skeptical attitude, and a small percentage (4.4%) remained undecided.
In contrast, as shown in Table 5, public attitudes towards robotic police officers were markedly negative. Only 28.7% expressed positive acceptance, while a majority (66.5%) expressed negative views. These results suggest that most respondents do not feel safer around robotic police officers. The high power distance observed between human police officers and the public does not seem to apply to interactions with robotic police. Most people hold a skeptical attitude towards robotic police officers and do not inherently perceive them as superior to their human counterparts.
Understanding public attitudes toward robotic physicians is crucial as this technology advances. As Table 5 shows, a majority (64.9%) expressed skepticism about robotic physicians providing superior medical services compared to humans, while only 30.0% held an optimistic view. These findings suggest a prevailing conservative stance among the public regarding robotic physicians. Interestingly, the existing power distance between human physicians and patients does not seem to transfer to the robotic scenario.
To better interpret the results of the H1 test, we compared public acceptance of AI robots in civil services with their acceptance in law enforcement and healthcare settings. As the concept of power distance is abstract, we use the terms “comparatively low” and “comparatively high” to describe the relative power distance in these scenarios. Table 6 shows the comparison between the civil service and law enforcement settings. A synthesis of prior research suggests a higher power distance between the public and police officers than between the public and civil servants, indicating that people are generally more deferential to the authority of police officers. However, in the robotic scenario, our data show a very low acceptance rate (28.7%) for robotic police officers compared to a 45.6% acceptance rate for robotic civil servants. This suggests a comparatively lower human-computer power distance in the law enforcement setting, as most people do not accept the idea of robotic superiority in law enforcement. Conversely, the higher acceptance rate for robotic civil servants indicates a comparatively higher human-computer power distance. A similar result was observed when comparing the civil service and healthcare settings. As shown in Table 6, the comparatively higher power distance between patients and human physicians does not translate to the robotic scenario: most people reject the idea of robotic superiority over human physicians. Based on these findings, H1 is supported: the acceptance of AI robots varies across power distance settings.
Factors Associated with Human-Computer Power Distance
To test H2 and H3, we examined the connection between acceptance of AI robots and two factors: perceived social value of technology and familiarity with technology. We hypothesize that higher perceived social value and greater familiarity with technology will be associated with greater acceptance of AI robots. An independent-samples t-test was used to investigate whether there are significant differences in the general acceptance of robots between groups with high vs. low perceived social value of technology and between groups with high vs. low technology familiarity.
Perceived Social Value of Technology and Attitude Towards Robots
This section explores the potential multifaceted perceptions of technology’s social values by assessing three distinct areas: enhancing Taiwan’s global visibility, empowering freedom of expression, and fostering social justice. The Likert scale responses were consolidated into a binary variable capturing positive or negative sentiment. Responses of strongly agree and agree were grouped as positive, while responses of strongly disagree and disagree were grouped as negative. Neutral stances and refuse to answer responses were treated as missing values.
Table 7 reveals a statistically significant association (t = -6.49, p < .001) between the perceived ability of international technology companies to enhance Taiwan’s global visibility (P1-visibility) and overall attitudes towards robots. On the attitude index ranging from 3 (least favorable) to 15 (most favorable), individuals holding positive views of technology companies exhibited a more favorable stance towards robots, with a mean score of 8.3 compared to 7.2 for those with negative perceptions. This suggests a potential link between public trust in technology companies and broader acceptance of technological advancements like robots. However, it is important to note that both mean scores fall below the mid-point of 9, suggesting a generally conservative attitude towards robots; this trend of means below the mid-point, indicating a conservative stance, is also observed in Tables 9 and 10.
Table 8 reveals a statistically significant association (t = -6.42, p < .001) between the perceived capability of technology to empower freedom of expression and overall attitudes towards robots. Individuals holding a positive view of technology’s ability to enhance freedom of expression presented a more favorable attitude towards robots (M = 8.3, SD = 3.0) than those holding a negative view (M = 7.2, SD = 2.8). This finding suggests that individuals with a stronger belief in technology’s potential to enhance freedom of expression are also more likely to hold favorable views of the integration of robots into society.
Table 9 shows a significant association (t = -5.49, p < .001) between the perception of technology’s ability to help people achieve social justice and attitudes towards robots. Individuals holding positive views of technology’s ability to enhance social justice showed a more favorable attitude towards robots (M = 8.4, SD = 3.0) than those holding negative views (M = 7.5, SD = 2.9). In other words, the more positively a person views technology’s power to enhance social justice, the more favorable their attitude towards robots. Taken together, the results across all three items support H2.
Technology Familiarity and Public Attitudes Towards Robots
While previous research has yielded inconsistent findings regarding the link between technology familiarity and acceptance, our t-tests identified no significant association between these variables: questionnaire responses revealed no difference in overall robot acceptance based on familiarity with social media or with everyday AI tools like Google Maps. Thus, H3 is not supported by this test. The following section further tests H2 and H3 through correlation and regression analyses.
Factors Predicting Robot Acceptance
To construct a regression model, we conducted a principal component analysis (PCA) on all Likert scale items in the questionnaire. Four primary factors emerged from the analysis: robot acceptance (RA), familiarity with technology usage (FT), perceived social value of technology (SV), and questionable tech impact (QT). Items with a factor loading below 0.5 were excluded to enhance the reliability of the factors. Besides the original three questionnaire items (R-ser, R-phy, and R-pol) designed for RA, two items (R-colleague and R-boss) were added to this factor: To what extent do you agree with the statement “If robots could be my boss, in-house resource allocation would become fairer” (R-boss); and to what extent do you agree with the statement “If robots could be colleagues at my company, I would accept their decisions” (R-colleague). In addition to the aforementioned factors, two questionnaire items were added to the analysis: Q1-GC (“The government will control people’s thoughts and behaviors with developed technology”) and Q2-JS (“My skills at work will become obsolete if technology keeps developing”). The components of each variable and their corresponding factor loadings are presented below (KMO = 0.741). The score of each new variable was calculated as the average of its items. The variables, including RA (M = 2.7, SD = 0.9), FT (M = 3.8, SD = 0.9), SV (M = 2.9, SD = 1.1), QT (M = 3.15, SD = 1.1), and their items are listed in Table 10 below.
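As an illustrative, simplified sketch of this extraction step (unrotated principal components on simulated data; the authors’ exact rotation and software are not specified here), loadings can be computed and screened against the 0.5 threshold as follows:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_loadings(X: np.ndarray, n_factors: int = 4) -> np.ndarray:
    """Express principal components as loadings (item-component correlations)."""
    Z = StandardScaler().fit_transform(X)       # standardize the items
    pca = PCA(n_components=n_factors).fit(Z)
    # Loadings = eigenvectors scaled by the square root of the eigenvalues.
    return pca.components_.T * np.sqrt(pca.explained_variance_)

# Simulated stand-in data: rows = respondents, columns = Likert items.
X = np.random.default_rng(1).integers(1, 6, size=(300, 10)).astype(float)
loadings = pca_loadings(X)
keep = np.abs(loadings).max(axis=1) >= 0.5      # retain items loading >= .5
print(loadings.round(2))
print(keep)
```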
With these four factors, we further examined H2 and H3 and proposed one new hypothesis. H4: People’s technology skepticism is associated with robot acceptance.
The correlation test results, as shown in Table 11, indicate that older individuals tend to have higher educational levels (0.2) but less familiarity with technology (-0.150). The three key variables in our analysis, familiarity with technology, perceived social value, and questionable tech impact, were all positively correlated with robot acceptance (0.152, 0.241, and 0.159, respectively). Furthermore, these three variables are positively correlated with each other, suggesting mutual reinforcement.
The regression analysis results are presented in Table 12. In Model 3, which controls for demographic variables, familiarity with technology, perceived social value, and questionable tech impact all emerge as positive contributors to robot acceptance, thus supporting H2, H3, and H4. Notably, perceived social value is the strongest contributor among these factors, and, counterintuitively, tech skepticism (questionable tech impact) is positively rather than negatively associated with robot acceptance. Furthermore, educational level is a significant contributor in all models, suggesting that individuals with higher education levels are more likely to accept robots. The negative coefficient of gender in Models 1 and 2 indicates that male respondents tend to have higher robot acceptance scores than female respondents (male = 0, female = 1). However, this gender effect becomes insignificant in Model 3 when technology skepticism is controlled.
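A minimal sketch of how a model like Model 3 could be estimated with ordinary least squares; the data are simulated and all variable names are illustrative, not the authors’:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "FT": rng.normal(3.8, 0.9, n),    # familiarity with technology
    "SV": rng.normal(2.9, 1.1, n),    # perceived social value of technology
    "QT": rng.normal(3.15, 1.1, n),   # questionable tech impact (skepticism)
    "age": rng.integers(18, 80, n),
    "female": rng.integers(0, 2, n),  # male = 0, female = 1
    "edu": rng.integers(1, 6, n),     # ordinal education level
})
df["RA"] = (0.1 * df["FT"] + 0.2 * df["SV"] + 0.1 * df["QT"]
            + rng.normal(0, 0.8, n))  # simulated robot acceptance score

# Factor scores plus demographic controls, as in Model 3.
model = smf.ols("RA ~ FT + SV + QT + age + female + edu", data=df).fit()
print(model.summary())
```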
Discussions
The results of this study reveal a discrepancy in public perceptions of AI’s superiority across professional domains. In the civil service field, less than half of the respondents believed robots would outperform humans, indicating a cautious attitude. Respondents demonstrated an even more conservative attitude toward AI robots in law enforcement and healthcare, where power distance between service providers and receivers is significant in the human scenario. Less than one-third expressed a positive view of robotic physicians and police officers, highlighting a considerable trust gap between humans and computers in these sensitive areas.
Why doesn’t the high power distance observed between service providers and receivers in human interactions, particularly in law enforcement and healthcare, transfer to robotic scenarios? This study suggests that a high perceived human-computer power distance could explain this phenomenon. When technology operates as a “black box” in these contexts, it potentially amplifies the perceived risks associated with its application, encouraging a risk-averse attitude towards evaluating the machine’s performance.
The perceived risks stem from the inherent nature of the work performed by physicians and police officers. Physicians’ work demands diagnostic accuracy in clinical settings and often involves physical contact and even surgery. Similarly, police officers enforce the law and may use firearms to maintain order. Both professions demand infallible judgment to ensure the safety of patients or the public. The potential consequences of mistakes in these high power distance fields are severe, fostering caution towards visible AI applications. In contrast, civil servants typically perform tasks with lower associated risks. This explains the comparatively higher acceptance of robotic civil servants than of robotic physicians and police officers, as the perceived consequences of potential errors are less severe.
In examining the factors associated with AI robot acceptance, H2, H3, and H4 were all supported by the data through regression analysis, indicating that perceived social value of technology, technology familiarity, and technology skepticism contribute to explaining public acceptance of AI robots across professional fields. Furthermore, the coefficients indicate a positive association between these variables and the level of AI robot acceptance. While the results for perceived social value of technology and technology familiarity align with existing literature, the positive relationship between technology skepticism and AI robot acceptance is intriguing. The two questionnaire items grouped under technology skepticism pertain to fears about the government using technology as a controlling tool and worries about working skills not keeping pace with advancing technology. These fears do not deter users from utilizing technology for at least two reasons. First, the Technology Acceptance Model posits that perceived usefulness and perceived ease of use of a technology are associated with technology acceptance (Davis, 1989). This suggests that when it comes to personal technology usage, micro-level considerations, such as individual needs and preferences, outweigh macro-level concerns, such as fears of job displacement or government surveillance, which are beyond personal control. Second, technology skepticism can motivate individuals to engage with technology, either to alleviate fears of being left behind or to increase awareness of potential surveillance.
Conclusions
The accelerating pace of digitalization has created a paradigm shift, weaving technology into our daily lives. Driven by media coverage and societal expectations, technology has become synonymous with progress and innovation. The emergence of products powered by language models like ChatGPT in late 2022 further underscored the ability and efficiency of AI in processing vast datasets. However, the development of this technology has raised concerns about a potential power shift, with humans seemingly losing autonomy to AI. The perceived superiority of AI has widened the power distance between humans and machines. While we may now fear becoming mere tools for machine maintenance, we also enjoy the convenience that comes at the cost of losing some autonomy.
Public distrust of human-like AI, particularly in fields such as law enforcement and healthcare, stems from anxieties about power dynamics and perceived risks. In contexts of high-power distance, this distrust hinders people from believing that machines can outperform humans, ultimately affecting their willingness to cede control over critical decisions. While this study does not advocate for replacing human professionals with AI robots, its findings offer valuable insights for organizations seeking to integrate such technology into public services. Organizations must actively address public concerns about power imbalances and potential risks to foster trust and facilitate responsible AI integration. By understanding and addressing these anxieties, we can pave the way for a future where AI serves as a responsible and ethical partner in society. Our final remark is a call to the AI industry ecosystem to prioritize public perception, because ultimately, the well-being of users and service recipients is essential.
Research Limitations
This study has three limitations. First, the use of a randomized respondent substitution method in household random sampling may introduce bias. Individuals less likely to be available for interviews, such as those in certain age groups, may be underrepresented, potentially skewing the sample demographics. Although chosen as a compromise between obtaining data and respecting participants’ availability, this approach raises concerns about bias. However, an examination of the sample’s demographic characteristics revealed no statistically significant differences compared to the target population.
Second, respondents’ perceptions of robotic civil servants, physicians, and police officers, particularly the latter, can be influenced by portrayals in movies and the media. Stereotypes of robotic police officers often involve exaggerated violence and catastrophic consequences, which may not reflect their intended applications in reality. This reliance on imagined scenarios presents a challenge in accurately assessing public attitudes towards these emerging technologies. Nevertheless, this study highlights conservative attitudes towards new public policies with which people are unfamiliar. Public resistance to such applications often stems from anxieties and unfamiliarity. While the exact rate of acceptance from this study may not be entirely representative, the pattern of overall attitudes is worth noting.
Third, the scales for perceived social value of technology and acceptance of AI robots demonstrated low internal consistency, indicating potential issues with measurement reliability. This is likely due to the wide age diversity of the sample, particularly for topics where age-related differences are expected. Additionally, there may be variability in respondents’ imaginations about technology and robots, which could have influenced their interpretations of the survey items. Future research could explore ways to mitigate this variability, such as providing more specific definitions or examples of robots within the survey instrument.
Despite these limitations, this study addresses the timely and critical issue of public acceptance of AI robots in diverse professional domains. This research contributes to a deeper understanding of public attitudes towards AI robots and informs the development strategies for their successful integration into professional environments.
Funding
This project has received funding from the National Science and Technology Council, Taiwan. Grant no. 111-2423-H-194-001 (Digital Hegemony and the Crisis of Social Justice Under Neoliberalism: An Interdisciplinary Perspective).
[1] Unmanned Vehicles Technology Innovative Experimentation Act, 2018. https://law.moj.gov.tw/ENG/LawClass/LawAll.aspx?pcode=J0030147