
Abstract: Algorithmic recommender systems were originally intended to grapple with the negative impacts of information overload, but they can also be used as a "hypernudge", a new form of online manipulation that intentionally exploits gaps in people's cognition and decision-making to influence their choices, which is particularly detrimental to the sustainable development of the digital market. Limiting harmful algorithmic online manipulation in digital markets has therefore become a challenging task. Both the EU and China have responded to this issue, and the differences between their approaches are evident enough that their governance measures can serve as typical cases. The EU focuses on improving citizens' digital literacy and their ability to integrate into digital social life so that they can confront the issue independently, and expects to address harmful manipulation through binding and directly applicable hard law as part of its digital strategy. By comparison, although certain existing legal norms already address manipulation issues, China continues to issue specific departmental regulations on algorithmic recommender services and pays more attention to the collective harm caused by algorithmic online manipulation, relying on a multi-party co-governance approach in which supervision is led by the government or industry associations.
Keywords: algorithm; manipulation; digital market; the EU; China
CLC: D93    Document Code: A    Article ID: 2096-9783(2025)02-0138-01
1 Introduction
Big data technology and artificial intelligence continue to penetrate human society, changing modes of production, liberating and developing human productivity, and transforming interpersonal and production relations. With AI technologies increasingly embedded in the functioning of society, curbing the ethical risks associated with the impact of AI on society, human life, and ecosystems has become a central issue in the regulation and governance of AI around the world. Nowadays, the personalized distribution of commercial information such as goods, services and advertisements to different users through AI systems has become commonplace in the digital market. Intelligent algorithms have routinely become the technical core supporting the business of online commerce platforms, driving platform business content such as product recommendations, information updates, personalized services and advertisement placements. This enables consumers, despite operating in an environment of information overload, to perceive their interactions with platforms as enjoying consumption choices "tailor-made" for them or convenient solutions that meet their consumption expectations. In this process, the ability to influence people's decisions and drive sales of goods or services is regarded as an advantage of AI systems.
Whether and how to regulate marketing manipulation in digital markets has been a challenge for policymakers: although the dynamic nature of online marketplaces and innovative recommender systems provide more individualized services for consumers, consumers may at the same time suffer more hidden harm than in traditional markets[1]. In particular, emotional AI, which algorithmic models may exploit in ways intended or unintended by their users, captures and optimizes for expected consumer weaknesses in cognitive or emotional domains and may be more capable of manipulating users' decisions[2]. The accumulated harm of manipulation by tech giants might also induce systemic risks that damage social trust at the collective level[3]. Given these challenges, the governance of manipulative AI systems in digital markets needs to be discussed.
Overall, the accumulation of data based on AI technology brings, on the one hand, significant benefits to consumers and, on the other hand, important ethical and legal concerns. How to address the challenges posed by AI and distinguish the beneficial effects of online manipulation from the detrimental ones has become a challenging issue. In this situation, both the EU and China have provided legal responses to these issues. However, the fact is that there are significant differences between the regulations of the EU and China.
2 The Technical Implementation and Hazard of Online Manipulation
The issue of online manipulation is intimately associated with the advancement of information technology. Nevertheless, since the very concept of "manipulation" remains controversial, it is necessary to clarify the essence and technical implementation of this term before examining the pernicious impacts of online manipulation.
2.1 Interpretation of the Essence of Manipulation
The concept of "manipulation" has existed for a long time in the history of human civilization and has been highly debated in the research literature, where it is used both as a value-neutral, technical concept and as a value-laden, normative term[4]. Different disciplines, such as psychology, communication, marketing, advertising, and public policy, have all provided their own insights into the conceptualization of manipulation, but the relevant content has not yet been integrated into a satisfying normative theory, which interferes with the accurate articulation of the issue of online manipulation[5].
A plausible and widely accepted interpretation of manipulation comes from the perspective of law and economics: manipulation refers to behaviors that influence people's choices in ways that do not sufficiently engage or appeal to their capacity for reflection and deliberation. Notably, the term "sufficiently" is difficult to define clearly, because people's decisions are influenced by multiple variables, such as information resources, cognitive abilities, and social norms; the ambiguity and openness inherent in the concept properly express a combination of considerations concerning people's reflection and deliberation[6].
Manipulation can be clearly distinguished from persuasion, coercion, and deception, mainly because manipulation is achieved by influencing an individual while attempting to bypass or diminish the individual's deliberative decision-making capacities, so that the individual's choices are oriented towards the manipulator's preconceived plan[7]. While persuasion aims to influence an individual's choices, it preserves and respects the individual's freedom of decision-making after disclosing the full extent of relevant information; coercion and deception, from different angles, deprive an individual of his or her ability to make conscious decisions[7]. A vague understanding of the concept of manipulation can unduly expand its perceived impact on people's choices, especially by conflating it with legitimate, rational persuasion. For example, warnings could, under a loose reading of the definition above, be seen as a method of manipulation, yet they are intended only to inform people of risks[6]. Thus, it is not appropriate to determine whether an act constitutes manipulation solely by the subjective cognitive criteria of the person affected.
Besides, the main idea of philosophers and political theorists is that manipulation, at bottom, means leading someone along, inducing them to behave as the manipulator wants[8]. In other words, the greatest harm of manipulation is the violation of an individual's autonomy: in a manipulative environment, the manipulated person plays a more passive role than the manipulator because he or she is unable to understand the manipulator's true intentions or to discern the full consequences of his or her actions[9].
In practice, threats, punishments, rewards, offers and nudges are specific forms of manipulation. Manipulation through threats and punishments is more pronounced, as both are designed to influence the choices and actions of the manipulated by imposing negative consequences on them, and both are easily resisted by the manipulated to varying degrees. Manipulation through rewards and offers, by contrast, tends to mobilise and seduce the manipulated with "expectations", and there may even be cases in which the manipulated are aware of the behaviour in question but still adopt a non-resisting or even welcoming attitude[10].
Nudge is perhaps the most controversial form of manipulation. People's decision-making is not always based on rational consideration: conscious experience, impressions, intuition, preferences, and other unconscious brain activities often serve as an important basis for decisions. Therefore, the design and adjustment of decision-making environments, including the shortcuts of intuitive thinking known in psychology as "heuristics and biases", can influence people's decisions[11]. For example, supermarkets selectively place best-selling or promotional items on shelves at eye level, and application developers steer users into accepting their software as the default for online activities. What makes nudge unique among forms of manipulation is that it neither prohibits any choice nor significantly alters the economic incentives behind choices; instead, it predictably guides people to make or change choices. It is, therefore, a soft, design-based form of control.
2.2 Manifestation of Online Manipulation in Digital Markets
The development of algorithmic technology has further expanded the toolbox for manipulation: hypernudge has become a new type of nudge in the Internet environment[12]. Unlike traditional forms of nudge, hypernudge rests on a complex algorithmic process that analyses a user's habits, preferences, and interests from that user's individual data sources and produces relevant action predictions and countermeasures. Hypernudge has characteristics of networking, automation, dynamisation, and universal applicability that traditional forms of nudge lack. In particular, it may cause negative consequences by anchoring consumers' decision-making context and frame of reference when recommending relevant products to them, thus harming their rights and interests[8].
On this basis, online manipulation can be understood as a particular form of manipulation facilitated by information technology that takes advantage of the cognitive and decision-making loopholes of people and intentionally and covertly guides their actions to influence their response choices, in which intelligent algorithms play a crucial role as tools. As the technical foundation of online manipulation lies in the effective integration and utilisation of data resources, and the recommendation of goods/services based on algorithms is directly constrained by the ecology of the upstream information sources, we will examine the characteristics of the technical architecture of the computing system on which online manipulation is based[13].
First, the realisation of online manipulation relies on the increasingly sophisticated technology of Big Data, shorthand for collating, processing, analysing, and using vast exploitable datasets of unstructured and structured digital information[14]. Big Data technology is at the core of contemporary digital marketplaces and the business development of online commerce platforms. Its significant value lies in discovering patterns of individual activity and establishing connections between individuals and other individuals or groups through the collection and organisation of massive amounts of fragmented data. In the process of human-machine interaction between online platforms and users, data analytics draws on sources such as personal identity information, geographic location, transaction records, health information, social interactions, and product-browsing traces. The data each individual generates in the online environment becomes the basis for predicting and analysing that individual's behaviour in future events, accurately capturing private characteristics such as preferences, interests, and habits, and integrating the fragmented data in an orderly manner into a user portrait[15], which makes it possible to model human behaviours and tendencies for various purposes[16]. In the information choice architecture, Big Data analytics can be supercharged by automated decision-making processes that require no human intervention and by digital decision-guidance processes that steer individual decisions[12].
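The aggregation of fragmented behavioural data into a user portrait can be illustrated with a minimal sketch. The event types, categories, and weights below are invented for exposition and do not reflect any real platform's profiling logic; the point is only that heterogeneous traces can be reduced to an interest distribution usable for prediction.

```python
from collections import Counter

# Hypothetical behavioural events a platform might log for one user.
events = [
    {"type": "view", "category": "running shoes"},
    {"type": "view", "category": "running shoes"},
    {"type": "purchase", "category": "protein powder"},
    {"type": "search", "category": "marathon training"},
]

def build_user_portrait(events):
    """Aggregate fragmented event data into a simple interest profile.

    The weights are illustrative assumptions: a purchase is taken to
    signal stronger interest than a search, which outweighs a view.
    """
    weights = {"view": 1.0, "search": 2.0, "purchase": 3.0}
    portrait = Counter()
    for e in events:
        portrait[e["category"]] += weights.get(e["type"], 1.0)
    total = sum(portrait.values())
    # Normalise to a probability-like interest distribution.
    return {cat: round(score / total, 3) for cat, score in portrait.items()}

print(build_user_portrait(events))
```

Real systems operate over far richer signals and learned models, but the same principle applies: each additional data source sharpens the portrait and, with it, the platform's ability to predict and steer behaviour.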
Second, in computing systems, recommendation algorithms are designed as embedded, persuasive manipulation tools which, through automated and all-encompassing dynamic recording of data and adjustment of predictions, create a "filter bubble" that narrows the scope of people's information navigation and filters and controls the information people receive, ultimately exerting substantial influence on their behaviour and choices[7]. The accuracy and effectiveness of the recommendation algorithm are key to the platform's user stickiness: if the recommended content does not meet users' needs or preferences, the user experience suffers seriously, so the algorithms and algorithmic models applied in the system have become an important strategic market resource and a core competitive asset. With advances in data accumulation and processing technology, the accuracy of algorithmic prediction based on big data is constantly improving[17].
Recommendation algorithms come in a variety of patterns, mainly including: (1) the content-based filtering algorithm, which takes the user's preferred content as a reference and finds content with a high degree of match to it[18]; (2) the collaborative filtering algorithm, the most widely used in business practice, which makes recommendations based on the similarity of users and content[19]; (3) the content-traffic-pool-based algorithm, which uses comprehensive indicators of the relevant content, such as browsing, liking, forwarding, searching, and completion rate, as the benchmark for stacked, tiered recommendation[19]. Since the overall traffic of the whole platform is relatively stable in the short term, tilting traffic toward certain content creates enormous opportunities for individuals, which is the fundamental reason why many users become famous overnight through their uploaded short videos or merchandise marketing programmes.
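The collaborative filtering pattern named in (2) can be sketched in a few lines. The users, items, and binary "liked" signals below are hypothetical toy data; production systems use matrix factorisation or neural models over millions of interactions, but the core idea, recommending what similar users liked, is the same.

```python
import math

# Hypothetical user-item interaction matrix (1 = liked, 0 = no signal).
ratings = {
    "alice": {"itemA": 1, "itemB": 1, "itemC": 0},
    "bob":   {"itemA": 1, "itemB": 1, "itemC": 1},
    "carol": {"itemA": 0, "itemB": 0, "itemC": 1},
}

def cosine(u, v):
    """Cosine similarity between two users' rating vectors."""
    dot = sum(u[i] * v[i] for i in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(target, ratings):
    """Recommend items the most similar other user liked but the target has not."""
    others = {name: r for name, r in ratings.items() if name != target}
    nearest = max(others, key=lambda n: cosine(ratings[target], ratings[n]))
    return [item for item, score in ratings[nearest].items()
            if score and not ratings[target][item]]

print(recommend("alice", ratings))  # → ['itemC']
```

Here alice's vector is closest to bob's, so bob's additional like (itemC) becomes alice's recommendation; this is also how homogenised content propagates across similar users, the mechanism behind the cross-matching effect discussed below.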
Given the technical mechanisms of the AI recommendation system, this behaviour can cut both ways. On the one hand, algorithmic recommendation systems are essential for online platforms in real-world situations where people have limited attention spans, as they help users enjoy the benefits of technological advances and help platforms gain user stickiness despite the dilemma of information overload[20]. On the other hand, online manipulation in the form of hypernudge can also be implemented on top of algorithmic recommendation systems by gaming recommender algorithms for specific purposes[21]. At the individual level, content-based recommendation algorithms can subtly drive people to focus only on their areas of interest, placing them in self-selected cocoons of homogenised information, isolating them from other voices or topics, and producing the echo chamber effect[22]. Because homogenised content can be cross-matched to different users through collaborative filtering, this effect widens the range of users trapped in the information cocoon and amplifies social risks. In addition, at the collective level, the platform can adjust weighting indexes and use the stacked recommendation algorithm based on the content traffic pool to push specific content to more users, which directly shapes users' information selection structure and thereby manipulates their choices of response.
On the whole, the manipulative effectiveness and persuasive power of hypernudge are more pronounced because online manipulation can continuously market products or services by triggering consumers' personal preferences, privacy, or information vulnerabilities, compared with guiding consumption through generalized justifications for new product launches and limited-time promotions. This indicates that online manipulation has a higher rate of success and, therefore, a more significant impact at both the individual level and the overall allocation of resources in the market. Thus, in the process of online manipulation, the algorithmic system initially used to improve the efficiency of information retrieval and reduce transaction costs may be transformed and alienated into a tool to help platforms pursue unfair benefits, causing actual or potential harm to consumers' rights and interests, which may evolve into systemic risk[23]. However, similar to pre-digital traditional advertising or face-to-face manipulative marketing techniques, it is still essential to distinguish between beneficial persuasion through algorithmic recommendation and harmful manipulation.
2.3 The Hazard of Online Manipulation to Sustainability
The phenomenon of information cocoons emerges as the correlations among data are harnessed to direct the orientation of information selection within algorithmic automated decision-making systems. The networked and perpetually dynamic technological architecture of these systems allows them to personalize the configuration of users' information-selection environments in meticulous detail. By artfully molding users' comprehension within particular scenarios and employing gentle yet insidious persuasion rather than blatant coercion, such systems surreptitiously steer users towards decisions that have been predetermined[12].
The essence of this subtly persuasive approach lies in the fact that online business platforms can explore the relationship between users' target needs and data resources through recommendation algorithms, and customize information according to different kinds of users' needs. This appears to return decision-making authority to individual users, but it actually restricts users' decision-making ability and the range of products or offerings they can choose from.
In light of the opacity characterizing hypernudge and data processing procedures, consumers lack the practical capacity to acquire knowledge and exercise choice in the context of such dissemination modalities. Consequently, they are rendered incapable of making decisions that are both reasonable and in alignment with their psychological anticipations. As a matter of fact, the algorithm itself can serve as a mechanism or an actual force for mobilizing and allocating social resources, manipulating users' actions within a complex technological ecosystem. At this point, online platforms can utilize the advantageous position in information asymmetry gained through large-scale collection and analysis of user data, purposefully adjust the information presented to users, and thereby achieve the purpose of manipulation while capturing substantial profits.
Online manipulation seriously affects consumers' decision-making autonomy through hypernudge and leads them to make irrational purchases in the digital market. This negative consequence can also be understood from the perspective of welfare: manipulation leads to inefficient outcomes in resource allocation. Consumers choose an irrational, suboptimal transaction, meaning that they do not act in accordance with their normal consumption expectations or preferences. As a result, consumers obtain non-optimal or even unwanted products, while merchants obtain undeserved funds, depriving other market players of the best allocation opportunities for that money[24].
By influencing users' information selection architecture, online business platforms can further reap profits at the price level. A common phenomenon is that they offer different retail prices for the same product or service to consumers with different spending capacities, a practice called "personalized pricing" or "algorithmic pricing"; economists tend to refer to this phenomenon as "price discrimination"[25].
For instance, in the U.S., the ride-hailing platform Uber has faced complaints over dynamic price adjustments reportedly based on users' mobile phone battery levels, the rationale being that users with low battery typically have a more urgent need for immediate transportation. In China, online business platforms such as Meituan and Pinduoduo have likewise been subject to administrative penalties from regulatory authorities over personalized pricing issues[26].
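The mechanics of such personalized pricing can be shown with a deliberately simplified sketch. The signals (battery level, spending history) and multipliers below are entirely hypothetical and do not reproduce any real platform's pricing logic; they only illustrate how inferred urgency or willingness to pay can translate into different prices for the same service.

```python
def personalized_price(base_price, battery_level, history_spend):
    """Illustrative (hypothetical) dynamic pricing rule.

    All thresholds and multipliers are invented for exposition.
    """
    multiplier = 1.0
    if battery_level < 0.2:    # inferred urgency: low phone battery
        multiplier += 0.15
    if history_spend > 1000:   # inferred higher willingness to pay
        multiplier += 0.10
    return round(base_price * multiplier, 2)

# Two consumers see different prices for the same ride.
print(personalized_price(20.0, battery_level=0.9, history_spend=200))   # 20.0
print(personalized_price(20.0, battery_level=0.1, history_spend=1500))  # 25.0
```

The discrimination is invisible to each consumer, who sees only a single price; this opacity is precisely what distinguishes algorithmic price discrimination from posted-price differentiation in traditional markets.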
In the long term, consumers will find themselves in an even more disadvantaged position within the online market, and this situation is highly prone to triggering intense resentment and even a trust crisis, thereby exerting an adverse influence on the overall sustainable development of the online market. Obviously, as the scale of the manipulated entities continues to expand, the collective harm radiating from individuals will pose a significant and evident threat to the overall development of the economy and society, with far-reaching and adverse impacts that cannot be overlooked.
3 Response of the EU and China
Both the EU and China have recently introduced governance countermeasures regarding the issue of online manipulation, but there are obvious differences between these countermeasures. Besides, this issue is actually related to the existing legal regulations, and it needs to be analyzed from a broader perspective.
3.1 Recent Developments in the EU
On May 21, 2024, the Council of the European Union approved the Artificial Intelligence Act (AIA). As legislation at the highest level of the EU legal system, the AIA has wide-ranging implications, not only because it is directly binding on AI users in the EU, on providers and deployers of AI systems located outside the EU, and on their licensees located in the EU, but also because the Act itself has become a model AI governance solution for the world. Article 5 of the AIA explicitly lists manipulative AI systems that lead a person to take an irrational decision causing significant harm among the prohibited AI practices subject to the most stringent level of regulatory governance, and this prohibition also applies to digital markets because the Act does not limit its scope to exclude them. Although few AI systems are deliberately designed to manipulate consumers or investors, in practice some systems that can influence users' decisions may be considered manipulative and will be affected by this act.
Article 5 of the AIA requires that, for a practice to be deemed unlawful manipulation, it must materially distort the behaviour of a person (or of a person belonging to a targeted group) in a manner that causes or is likely to cause that person or another person physical or psychological harm. This may occur through a characteristic of the technology: an AI system deploying subliminal techniques beyond a person's consciousness (AIA § 5.1(a)); or through a characteristic of the people targeted: an AI system exploiting vulnerabilities of a specific group of persons due to their age or physical or mental disability (AIA § 5.1(b)). The two approaches capture the bias and vulnerability that can arise in human decision-making from the perspectives of technical effect and human physiology.
It is easy to reach a consensus on identifying the vulnerable groups referred to by human physiology; for example, adolescents have immature cognition, and their decisions often lack a factual basis and rational thinking. However, the subliminal technique mentioned among the technical features lacks a precise definition. Given the natural variation in cognitive abilities among individuals, it is extremely challenging to establish a causal link between manipulative AI systems and physical or psychological harm; even when citizens' electoral rights are infringed, such harm still does not fall within the purview of Article 5 of the AIA.
An overly broad interpretation of "manipulative" would surely erect non-technical barriers to the healthy development of the Internet economy and severely undermine the welfare that technological progress bestows on people's lives. Accordingly, the AIA carefully prohibits AI manipulation only when it "causes or is likely to cause that person or another person physical or psychological harm", which avoids prohibiting common manipulation practices that might not lead to serious harm for most people. Many civil-society organisations, such as the European Consumer Organisation, note that AI that manipulates humans in a way that causes economic or societal harm is not covered by the AIA proposal; only AI that causes physical or psychological harm through manipulation is. European Digital Rights and Amnesty International add that the specific vulnerabilities listed are very limited: only age and physical or mental disability are covered[27].
Due to the broad effects of AI manipulation, regulation may also draw on provisions relating to consumer rights protection, data protection, e-commerce, security and the regulation of digital markets under EU laws such as the General Data Protection Regulation (GDPR), the Digital Services Act (DSA), and the Digital Markets Act (DMA). For example, Section 4 of Chapter III of the GDPR covers the right to object and automated individual decision-making: Article 21 stipulates that the data subject has the right to object to the processing of personal data concerning him or her for marketing or profiling purposes, and Article 22 stipulates that the data subject has the right not to be subject to a decision based solely on automated processing, including profiling. Furthermore, the DSA, DMA, GDPR, and AIA all stipulate obligations of transparency and fairness in digital services to ensure fair and transparent data-processing practices and to prevent online manipulation.
In addition, the EU treats improving citizens' digital literacy and their ability to integrate into digital social life as long-term, continuous social work. Since 2013, the EU has published the Digital Competence Framework for Citizens (DigComp) to comprehensively enhance citizens' digital literacy, and the framework is an important basis for its digital-skills policies. Version 2.2, updated in March 2022, responds to the higher demands that emerging technologies, such as AI, virtual and augmented reality, and robotisation, along with other developments since the release of version 2.1 in 2017, place on citizens' digital literacy. DigComp 2.2 emphasizes that the competences are a combination of knowledge, skills, and attitudes, subdivided into five competence areas: information and data literacy, communication and collaboration, digital content creation, safety, and problem solving. Proficiency is divided into four levels: foundation, intermediate, advanced, and highly specialised, and the framework provides 259 examples of knowledge, skills, and attitudes. DigComp 2.2 helps citizens correctly understand the technology behind hypernudge and serves as an important factor in freeing them from online manipulation. For instance, one knowledge example in DigComp 2.2 reads: "aware that search engines, social media and content platforms often use AI algorithms to generate responses that are adapted to the individual user (e.g. users continue to see similar results or content)", a practice often referred to as "personalisation"[28].
3.2 Recent Developments in China
Regarding the issue of hypernudge in algorithmic recommendation services, China specifically introduced the Internet Information Service Algorithmic Recommendation Management Regulations (IISARMR) in 2021. Article 8 of the IISARMR requires providers of algorithmic recommendation services to conduct regular audits, evaluations and verifications of algorithmic mechanisms, models, data and application results, and not to set up algorithmic models that induce users to become addicted, overspend or otherwise violate laws, regulations, ethics or morals. Article 14 explicitly prohibits using algorithms to manipulate user accounts and rankings. The IISARMR also stresses the protection of those who are vulnerable due to physical condition, such as minors and the elderly, in Articles 18 and 19, and due to their particular positions, such as workers in Article 20 and consumers in Article 21, though without explicitly mentioning manipulation risks to those groups. In addition, in terms of user initiative, Article 16 stipulates the user's right to know about the algorithmic recommendation services provided, and Article 17 further stipulates the user's right to reject or partially reject algorithmic services.
Before the IISARMR, other regulations had already made relevant stipulations on manipulation issues. For example, in the Consumer Rights Protection Law (CRPL), Article 9 stipulates the consumer's right to choose, and operators are prohibited from conducting price discrimination, giving false or misleading publicity, and deceiving or misleading consumers; Article 23 stipulates that when providing goods or services, operators shall obtain consent for the use of personal data and shall not excessively collect consumers' personal information. Furthermore, Article 24 of the Personal Information Protection Law (PIPL) clearly stipulates that information processors using personal data for algorithmic decision-making shall ensure the transparency of the decisions and the fairness and impartiality of the results, and are prohibited from offering unjust recommendations based on personal preferences. Meanwhile, people have the right to refuse services based on algorithmic decision-making when the decision has a significant impact on their personal rights or interests.
At the same time, co-regulation and self-regulation in China are also very active. There are at least two levels of co-regulation in China. The first level is often led by industry associations that implement rules and standards for mitigating the risks of AI. For example, the China Network Audiovisual Program Service Association (CNAPSA) issued the Standard Rules for Reviewing the Content of Online Short Videos and the Management Code for Online Short Video Platforms to refine content review requirements for short video platforms. The National Information Security Standardization Technical Committee issued the Guide to Cybersecurity Standard Practice - Guidelines for Preventing Ethical Security Risks of Artificial Intelligence. The Shenzhen Artificial Intelligence Industry Association and dozens of AI companies, including Kuangshi, ObiZhongguang, Kukai, Gaoxin, Yunyi, Testin Cloud Test, Elite Vision, KDDI, Ubiquitous, etc., jointly initiated the first New Generation of Artificial Intelligence Industry Self-discipline Convention[29]. The second level is China's government-led multi-departmental joint meeting system, which addresses risks through administrative organs and urges online operators to take corrective measures as mandated.
In addition, at the request of the government or industry associations, online platforms in China must issue guidelines to promote self-regulation. For example, Douyin, a video-sharing platform where users can generate content and share their lives, operates on algorithmic recommendation systems. To safeguard users' interests, the Douyin User Service Agreement grants the platform the authority to review content and address user violations, for example by giving advance warnings, rejecting publication, immediately stopping the transmission of information, deleting content, temporarily prohibiting the publication of content or comments, restricting some or all functions of the account, terminating the provision of services or permanently closing the account, and taking other measures stipulated by laws and regulations[30].
4 Evaluation of Governance Measures of the EU and China
From an objective perspective, online manipulation achieved through AI systems indeed has negative effects on the normal life of members of society and on the order of economic operations. However, on the one hand, the original intention and vital function of developing and applying hypernudge with AI systems are to offset the negative impacts brought about by information overload, and it theoretically should not be completely prohibited. On the other hand, manipulation itself has long existed throughout the history of human civilization, particularly in ordinary interactions between consumers and salespeople, and the degree of property damage that manipulation through artificial intelligence and related digital technologies causes consumers today does not differ significantly from that caused by human-to-human interactive manipulation in the real world. As a consequence, both the EU and China have strengthened the scrutiny of such behaviour by issuing specific regulations on harmful algorithmic manipulation, but manipulation practices that manifest as hypernudge using AI systems are not directly prohibited in either jurisdiction.
Obviously, both the EU and China have noticed the potential harm of hypernudge, and their similar approaches to restricting or regulating hypernudge reflect a common concern for the protection of human welfare. As evidence, Section 1.1 of the Explanatory Memorandum of the AIA points out that its ultimate aim is to increase human well-being on the basis of EU values and fundamental rights, and Article 1 of the IISARMR likewise states that protecting the legitimate rights and interests of citizens, legal persons and other organizations is one of its most important objectives. Although both the EU and China aim at making AI a force for good in society in their AI governance concepts, there are evident discrepancies between them in the comprehension and implementation of the term 'good', which also gives rise to differences in their AI governance models.
Specifically, the EU opts to construct rules so that AI available in the Union market or otherwise affecting people is human-centric, and people can trust that the technology is used in a way that is safe and compliant with the law, including with respect for fundamental rights. In other words, the EU adopts legislation as its core governance approach because the term 'good' is understood as individual: the way to increase human well-being is to take the enhancement of individual well-being and the protection of individual rights as the starting point of the AI governance concept, by constructing a system of rights and obligations in the field of AI. In this regard, the EU Framework of ethical aspects of AI, robotics and related technologies clearly states that any new regulatory framework for AI should fully respect the Charter, and thereby respect human dignity, the autonomy and self-determination of the individual, and the ultimate aim of increasing every human being's well-being[31].
The words 'individual' and 'every' therein indicate the legislative concept of the AIA more specifically. Furthermore, in order to prevent citizens from losing their capacity for self-determination and falling into a passive position in the digital environment, DigComp 2.2 provides detailed guidance on how to improve citizens' literacy and ability in dealing with online manipulation.
In contrast, in China the term 'good' takes the collective interest as its starting point, and collective interests are not achieved through the mere aggregation of individual well-being. AI governance for increasing human well-being places greater emphasis on the requirements of national security, social public interests, economic order, and social order. In addition to providing that the legitimate rights and interests of individuals should be protected, Article 1 of the IISARMR also emphasizes that its legislative purpose is to safeguard national security and public interests, maintain the healthy and orderly development of Internet information services, and carry forward the core socialist values. For this reason, although certain legal norms exist in the AI field in China, the Chinese AI governance scheme features a more salient multiple co-governance model with government supervision at its core.
The following table presents the general regulatory framework for online manipulation and compares the typical governance measures of the EU and China, so that the differences in their handling methods can be observed at the implementation level.
Apparently, there is a significant difference between the governance strategies for online manipulation with AI systems in the EU and China. The EU expects to address harmful manipulation behavior through hard law that is binding and applicable throughout the EU. The AIA is not intended to create a technological monopoly through regulatory rules. Instead, as part of the digital strategy, it aims to provide better conditions for the development and utilization of innovative AI technologies. As a result, the AIA pioneers a risk-based regulatory strategy: the higher the risks to fundamental individual rights and society, the stricter the regulatory rules.
However, in China, matters related to algorithmic recommendation services are mainly dealt with through departmental regulations and co-regulation. According to Article 3 of the IISARMR, the national cyberspace administration department is responsible for coordinating the governance of algorithmic recommendation services and the relevant supervision, while relevant departments of the State Council, such as the Telecommunications Department, the Public Security Department, and the Market Supervision Department, are responsible for their corresponding management work according to their respective responsibilities. The Chinese multi-departmental meeting system, ordinarily led by these departments, plays an important role in facilitating communication and coordination among them. This makes the implementation of supervision in China more flexible and the feedback more prompt. However, on the one hand, departmental regulations may not be applicable to the trial of civil cases or binding on courts in specific cases. On the other hand, the current regulations lack relatively clear technical standards, and there is no clear definition of the degree of damage from algorithmic online manipulation that warrants punishment; as a result, normal recommendation services may also be affected.
5 Conclusion
Online manipulation is undoubtedly a controversial topic that can easily provoke polarized positive or negative assessments, and the differing approaches of the EU and China, to some extent, reflect their complex political and cultural backgrounds. The EU focuses more on manipulation that targets individual differences and would harm personal autonomy and decision-making abilities, while China seems to offer more protection in various respects for consumers' personal-information vulnerabilities arising from structural conditions, and empowers administrative organs to limit the abuse of AI systems, helping to address potential conflicts in advance. Although both the EU and China have offered plausible explanations for their approaches, comparative analysis of the two jurisdictions is essential for humankind to tackle this common challenge, and it remains necessary for each to draw lessons from the governance measures of the other. In particular, since the technical essence of hypernudge is to intentionally exploit people's cognitive and decision-making gaps to influence their decisions covertly, improving citizens' digital literacy to help them cope with online manipulation should be a crucial task, and China may develop more countermeasures on this issue by drawing on the EU's experience.
References:
[1] SPENCER S B. The problem of online manipulation[J]. University of Illinois Law Review, 2020(3): 959-1006.
[2] HACKER P. Manipulation by algorithms: exploring the triangle of unfair commercial practice, data protection, and privacy law[J]. European Law Journal, 2023, 29(1-2): 142-175.
[3] VEALE M, BORGESIUS F. Demystifying the draft EU Artificial Intelligence Act[J]. Computer Law Review International, 2021(4): 97-112.
[4] NYS T R V, ENGELEN B. Judging nudging: answering the manipulation objection[J]. Political Studies, 2017, 65(1): 199-214.
[5] WILKINSON T M. Nudging and manipulation[J]. Political Studies, 2013(2): 341-355.
[6] SUNSTEIN C R. The ethics of influence: government in the age of behavioral science[M]. Cambridge: Cambridge University Press, 2016: 82-83.
[7] BOTES M. Autonomy and the social dilemma of online manipulative behavior[J]. AI and Ethics, 2023(3): 315-323.
[8] SUSSER D, ROESSLER B, NISSENBAUM H. Technology, autonomy, and manipulation[J]. Internet Policy Review, 2019(2): 1-22.
[9] VAN DIJK T A. Discourse and manipulation[J]. Discourse & Society, 2006(3): 359-383.
[10] FADEN R R, BEAUCHAMP T L. A history and theory of informed consent[M]. Oxford: Oxford University Press, 1986: 356.
[11] KAHNEMAN D. Thinking, fast and slow[M]. New York: Farrar, Straus and Giroux, 2011: 1-8.
[12] YEUNG K. 'Hypernudge': big data as a mode of regulation by design[J]. Information, Communication & Society, 2017(1): 118-136.
[13] KLIESTIK T, ZVARIKOVA K, LAZAROIU G. Data-driven machine learning and neural network algorithms in the retailing environment: consumer engagement, experience, and purchase behaviors[J]. Economics, Management and Financial Markets, 2022(1): 57-69.
[14] KEMP R. Legal aspects of managing Big Data[J]. Computer Law & Security Review, 2014(5): 482-491.
[15] CHEN Y, HE J, WEI W, et al. A multi-model approach for user portrait[J]. Future Internet, 2021(6): 1-14.
[16] ROUVROY A. Of data and men: fundamental rights and freedoms in a world of big data[C]. Bureau of the Consultative Committee of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data [ETS 108], 2016: 1-38.
[17] ANDREW J, BAKER M. The general data protection regulation in the age of surveillance capitalism[J]. Journal of Business Ethics, 2021(3): 565-578.
[18] JALILI M, AHMADIAN S, IZADI M, et al. Evaluating collaborative filtering recommender algorithms: a survey[J]. IEEE Access, 2018(6): 74003-74024.
[19] CHEN Z, SHI C. Analysis of algorithm recommendation mechanism of TikTok[J]. International Journal of Education and Humanities, 2022(1): 12-14.
[20] PERRA N, ROCHA L E C. Modelling opinion dynamics in the age of algorithmic personalization[J]. Scientific Reports, 2019(9): 7261.
[21] SU C, VALDOVINOS KAYE B. Borderline practices on Douyin/TikTok: content transfer and algorithmic manipulation[J]. Media, Culture & Society, 2023(8): 1534-1549.
[22] HOU L, PAN X, LIU K, et al. Information cocoons in online navigation[J]. iScience, 2023(1): 1-16.
[23] CALO R. Digital market manipulation[J]. George Washington Law Review, 2014(4): 995-1051.
[24] ZARSKY T Z. Privacy and manipulation in the digital age[J]. Theoretical Inquiries in Law, 2019(1): 157-188.
[25] DESCAMPS A, KLEIN T, SHIER G. Algorithms and competition: the latest theory and evidence[J]. Competition Law Journal, 2021(1): 32-39.
[26] VICE Staff. Uber accused of charging people more if their phone battery is low[EB/OL]. (2023-04-11)[2024-10-16]. https://www.vice.com/en/article/m7beq8/uber-surge-pricing-phone-battery; Xinhua. China's regulator fines 5 community group-buying firms over unfair pricing[EB/OL]. (2021-03-04)[2024-10-16]. http://www.china.org.cn/business/2021-03/04/content_77270272.htm.
[27] UUK R. Manipulation and the AI Act[EB/OL]. (2022-01-18)[2024-11-01]. https://futureoflife.org/wp-content/uploads/2022/01/FLI-Manipulation_AI_Act.pdf.
[28] Joint Research Centre. DigComp 2.2: The Digital Competence Framework for Citizens[EB/OL]. (2022-03-17)[2024-11-01]. https://pact-for-skills.ec.europa.eu/community-resources/publications-and-documents/digcomp-22-digital-competence-framework-citizens_en.
[29] Shenzhen Artificial Intelligence Industry Association. The first self-discipline convention for the artificial intelligence industry has been announced by industry association[EB/OL]. (2019-08-18)[2024-11-07]. https://www.saiia.org.cn/index.php/2019/08/18/1212/.
[30] Douyin. Douyin User Service Agreement[EB/OL]. (2024-08-29)[2024-11-14]. https://www.douyin.com/draft/douyin_agreement/douyin_agreement_user.html?ug_source=sem_baidu&id=6773906068725565448.
[31] European Parliament. Framework of ethical aspects of artificial intelligence, robotics and related technologies[EB/OL]. (2020-10-20)[2024-11-22]. https://www.europarl.europa.eu/doceo/document/TA-9-2020-0275_EN.html.
Research on the Regulation of Algorithmic Online Manipulation in Digital Markets
-- Response Strategies of the EU and China
GU Chenhao a, b, WU Qian a, b
(a. Law School, Beijing Normal University, Beijing 100875, China; b. Research Center for Rule of Law Development, Beijing Normal University, Zhuhai 519087, Guangdong, China)