Abstract
Background
Existing research on facial emotion processing in Internet gaming disorder (IGD) has focused on single facial expressions, but little is known about ensemble coding of crowd facial emotion (multiple facial expressions presented simultaneously). Thus, this event-related potential (ERP) study aimed to investigate the temporal dynamics of ensemble coding of crowd facial emotion under interference in IGD.
Methods
Seventeen IGD participants and 17 control group (CG) participants completed a task requiring them to extract the mean emotion from crowd facial expressions under emotional interference while electroencephalographic activity was recorded.
Results
The N170 amplitudes elicited by crowd facial expressions in IGD were significantly smaller than in CG. Angry crowd faces evoked larger N170 amplitudes than happy crowd faces in IGD. Happy crowd faces elicited more negative early posterior negativity (EPN) amplitudes than angry crowd faces in CG, while no such difference was found in IGD. In the later ensemble coding stage, we found a significant three-way interaction among group, emotional valence, and interference in the frontal negative slow wave component.
Conclusions
IGD participants exhibited a weaker ensemble coding ability for crowd facial expressions. They showed an automatic processing bias towards angry crowd faces in the early stage, as well as insensitivity to happy crowd faces in the subsequent selective processing stage during mean emotion extraction. In the later stage, IGD participants failed to actively adopt appropriate cognitive strategies to inhibit interference. This study is the first to provide electrophysiological evidence for the characteristics of crowd facial emotion ensemble coding in IGD, and it contributes to clarifying how IGD affects social cognition.
Introduction
Internet gaming disorder (IGD) is defined as “the persistent and recurrent use of the internet to engage in games, often with other players, causing impairment or clinically significant distress” (American Psychiatric Association, 2013). According to recent studies, the prevalence of IGD ranges from 2% to 17% across cultures and ages (Q. Chang & He, 2024; Zhou, Yao, Fang, & Gao, 2022), indicating that IGD is a major global public health issue. Notably, excessive internet gaming behavior that results in reduced interest in real social activities and impaired social functioning is an important criterion for the diagnosis of IGD (American Psychiatric Association, 2013). In particular, a diminished ability to process facial expressions is a crucial indicator of impaired social functioning in IGD (Fan, He, Zheng, Li, & Meng, 2023; Fan, He, Zheng, Nie, et al., 2023; Nie, Pan, He, & Li, 2024).
Abnormal facial expression processing of IGD
IGD individuals exhibit a processing bias towards angry expressions and difficulty in effectively suppressing task-irrelevant emotional interference (Q. Chang & He, 2024). Specifically, recent behavioral studies found that, compared to healthy controls, IGD participants were more accurate in recognizing angry micro-expressions (MEs) than happy MEs (Fan, He, Zheng, Nie, et al., 2023), and adopted more lenient criteria and showed higher sensitivity for recognizing angry MEs (Fan, He, Zheng, Li, & Meng, 2023). Moreover, clinical scores were positively correlated with angry ME recognition accuracy but negatively correlated with happy ME recognition accuracy (Fan, He, Zheng, Nie, et al., 2023). These findings suggest that IGD individuals exhibit a response bias towards angry MEs, which may be related to addiction severity. However, existing studies have focused more on the processing of a single face, with less emphasis on crowd facial emotion processing.
Existing studies on emotional interference in IGD, which embed emotional stimuli as distractors in executive function tasks (L. Wu et al., 2020), have found that IGD individuals have greater difficulty inhibiting emotional interference (Chen, Yu, & Gao, 2022; Lee et al., 2015; Shin, Kim, Kim, & Kim, 2021). For instance, Chen et al. (2022) found that facial expressions, when used as interfering stimuli, induced smaller nogo-N2 amplitudes and larger nogo-P3 amplitudes than when they were used as target stimuli, implying that IGD individuals required more cognitive resources to maintain goal-directed processing in the face of emotional interference.
ERP components in this study
EEG is a non-invasive way to measure electrical brain activity, represented as potential differences between scalp electrodes (Dietrich & Kanso, 2010). Event-related potential (ERP) studies average EEG signals from many time-locked trials to investigate stimulus processing dynamics with millisecond precision (Hinojosa, Mercado, & Carretié, 2015; Light et al., 2010). Different ERP components, defined by scalp distribution, time window, and polarity, reveal cognitive processing at various levels (Dietrich & Kanso, 2010; Reinke, Deneke, & Ocklenburg, 2024).
The N170 is a negative component observed at occipital-temporal electrodes 120–200 ms post-stimulus (He, Liu, Guo, & Zhao, 2011; Luo, Feng, He, Wang, & Luo, 2010). It is sensitive to face stimuli (Rossion & Caharel, 2011; Schindler, Bruchmann, & Straube, 2023; Schweinberger & Neumann, 2016), reflecting bottom-up automatic facial processing (Xia, Li, Ye, & Li, 2014), with the main neural generator being the fusiform gyrus (Deffke et al., 2007; C. Gao, Conte, Richards, Xie, & Hanayik, 2019). Many studies have found that the N170 is modulated by emotional valence (Brenner, Rumak, Burns, & Kieffaber, 2014; Peng, Cui, Wang, & Jiao, 2017; Rellecke, Sommer, & Schacht, 2013; Rossignol et al., 2012). Specifically, threatening stimuli like angry faces generally evoke more negative N170 amplitudes than happy faces, indicating a threatening processing bias (O'Toole, DeCicco, Berthod, & Dennis, 2013; Rellecke, Sommer, & Schacht, 2012).
The EPN is a negative deflection observed at the occipital-temporal electrodes, peaking 200–300 ms post-stimulus (Aldunate, López, & Bosman, 2018; Langeslag, Gootjes, & Van Strien, 2018), reflecting early selective attention (Vormbrock, Bruchmann, Menne, Straube, & Schindler, 2023; Wieser, Pauli, Reicherts, & Mühlberger, 2010) and is associated with perceptual enhancement in the extrastriate visual cortex (Rellecke et al., 2012; Schupp, Stockburger, Codispoti, et al., 2007). Studies have found that threatening expressions elicited stronger EPN component than happy expressions (Rellecke et al., 2012; Schupp, Öhman, et al., 2004; Wieser et al., 2010). Additionally, the EPN is modulated by task goals (Schindler & Bublatzky, 2020; Schmuck, Schnuerch, Kirsten, Shivani, & Gibbons, 2023; Schupp, Stockburger, Bublatzky, et al., 2007), with target facial expressions eliciting larger negative EPN amplitudes than non-target facial expressions (Schmuck et al., 2023).
The slow wave occurs at approximately 500–1,000 ms post-stimulus, with an anterior-negative and posterior-positive gradient (Forester, Halbeisen, Walther, & Kamp, 2020; Matsuda & Nittono, 2018), and it has been suggested that the frontal negative slow wave may originate in the prefrontal lobes (Matsuda & Nittono, 2015, 2018). In working memory tasks, the frontal negative slow wave is modulated when participants purposefully manipulate memory information (retaining, concealing, or revealing), suggesting that this slow wave is related to goal-based working memory control (Bosch, Mecklinger, & Friederici, 2001; Forester et al., 2020; Matsuda & Nittono, 2018).
The ensemble coding of crowd facial emotion
Unlike the processing of single facial expressions, observers can rapidly extract the mean emotion from a crowd of facial expressions, a process known as ensemble coding of facial expressions (Haberman & Whitney, 2007, 2009). This ability is crucial for effectively navigating social interactions (Goldenberg, Weisz, Sweeny, Cikara, & Gross, 2021). Furthermore, during ensemble coding, outlier faces are discounted yet still contribute to mean emotion extraction (Haberman & Whitney, 2010). Salient emotional faces attract attention and drive overevaluation (a positive difference between estimated and actual values) of the mean emotion (Goldenberg et al., 2021). These findings provide a basis for the setup of interfering facial expressions in this study. Liu et al. (2023) found that ensemble representation dominated at short presentation times (100 ms) of crowd facial expressions, while individual representation became complete and prioritized at longer times (750 ms). Evidently, the ensemble coding pattern of crowd facial expressions changes over time, making it well suited to exploration with the high-temporal-resolution ERP technique in ways that are difficult to achieve with behavioral experiments.
The current study
This ERP study aimed to investigate the temporal dynamics of ensemble coding of crowd facial expressions in IGD under an interference condition (extracting mean emotion from target crowd facial expressions in the presence of a non-target facial expression). Based on the findings reviewed above, we proposed the following hypotheses: (1) Compared to the control group (CG), the IGD group would exhibit lower ability in ensemble coding of crowd facial expressions, as evidenced by smaller N170 amplitudes. (2) IGD participants would exhibit a processing bias towards angry crowd faces in the early stage of mean emotion extraction, as reflected by overevaluation of the mean emotion of angry crowd faces and more negative N170 or EPN amplitudes than for happy crowd faces. (3) In the later ensemble coding stage, IGD individuals would have more difficulty suppressing interference from facial expressions with different emotional intensities, as reflected in high emotional intensity interference leading to mean emotion overevaluation. Given the limited research on the neural mechanisms of crowd face processing under interference, we approached the ERP investigation with an exploratory attitude.
Methods
Participants
Thirty-four undergraduates and postgraduates from Liaoning Normal University, all meeting the screening criteria, participated as paid volunteers. Among them, 17 were IGD participants (11 females; aged 18–27 years, Mage = 22.53, SD = 3.06) and 17 were CG participants (10 females; aged 19–25 years, Mage = 22.94, SD = 1.75). The sample size was determined with reference to previous studies on the neural mechanisms of face processing in IGD or excessive internet use (He et al., 2011; He, Zheng, Fan, Pan, & Nie, 2019; Peng et al., 2017). All participants were right-handed and had normal or corrected-to-normal vision. In addition, all IGD participants were identified through screening surveys (N = 107) using strict diagnostic criteria, administered preferentially to respondents who reported enjoying internet games.
Similar to previous studies (Fan, He, Zheng, Li, & Meng, 2023; Fan, He, Zheng, Nie, et al., 2023; Tian et al., 2014; Z. Zhang et al., 2023; Zhou et al., 2022), the modified version of Young's Internet Addiction Test (IAT) (Young, 1998) was used as one of the measures to diagnose IGD participants. The IAT consists of 20 items on a five-point Likert scale (ranging from 1 = rarely to 5 = always) and is a valid and reliable tool for screening IGD participants (Pawlikowski, Altstötter-Gleich, & Brand, 2013; Widyanto & McMurran, 2004), especially for Asian college students (Frangos, Frangos, & Sotiropoulos, 2012). IAT scores higher than 50 indicate occasional or frequent problems with internet usage (Dong & Potenza, 2016). Moreover, we adopted the Nine-Item Internet Gaming Disorder Scale–Short-Form (IGDS9-SF) as another screening measure to select IGD participants. This instrument assesses the severity of IGD and its detrimental effects by examining both online and/or offline gaming activities occurring over a 12-month period (Pontes & Griffiths, 2014). The IGDS9-SF is widely used for assessing and diagnosing IGD in empirical studies of biopsychosocial correlates (R. Chang et al., 2023), is highly suitable for the measurement of IGD (Pontes & Griffiths, 2015), and demonstrates desirable reliability and validity (Monacis, Palo, Griffiths, & Sinatra, 2016; Pontes & Griffiths, 2015; Severo et al., 2020; Sit et al., 2023; Wong et al., 2020; T. Wu et al., 2017; Yam et al., 2019). The scale consists of 9 items, each corresponding to one of the 9 core diagnostic criteria of the DSM-5 (Pontes & Griffiths, 2014). The IGDS9-SF is also a five-point Likert scale (ranging from 1 = never to 5 = almost always), with 21 as the cutoff score for diagnosis (Monacis et al., 2016; Severo et al., 2020; Wong et al., 2020).
Accordingly, the criteria for screening IGD participants, combined with their average daily online gaming hours over the last 12 months, were as follows: (1) IAT score higher than 50; (2) IGDS9-SF score higher than 21; (3) more than two hours per day spent on internet gaming over the past 12 months; (4) free of mental disorders, personality disorders, substance addictions, behavioral addictions other than IGD, and central nervous system diseases. The criteria for screening CG participants were as follows: (1) IAT score lower than 40; (2) IGDS9-SF score of 21 or lower; (3) at most two hours per day spent on internet gaming over the past 12 months; (4) free of mental disorders, personality disorders, substance addictions, behavioral addictions, and central nervous system diseases. Moreover, meta-analyses have shown that depression and anxiety are the main comorbidities in IGD populations (Y. Gao, Wang, & Dong, 2022) and cause facial expression processing abnormalities (Mardaga & Iakimova, 2014; Q. Zhang, Ran, & Li, 2018). Therefore, this study controlled for depression and anxiety to eliminate their confounding effects on crowd facial emotion processing (Peng et al., 2017). We measured participants' levels of depression and anxiety after the experiment using the second edition of Beck's Depression Inventory (BDI) and Beck's Anxiety Inventory (BAI). Both scales consist of 21 items, each rated from 0 to 3, with higher scores indicating more severe depression/anxiety, and both show good reliability and validity (A. T. Beck, 1961; Aaron T. Beck, Brown, Epstein, & Steer, 1988; Nie et al., 2024). Although there were significant differences in BDI and BAI scores between the IGD and CG participants, the average BDI and BAI scores of both groups were within the normal range (Bardhoshi, Duncan, & Erford, 2016; Jackson-Koku, 2016). The demographic characteristics of the participants are described in Table 1.
Table 1. Demographic information and clinical characteristics of the participants
Measure | IGD (n = 17) M ± SD | CG (n = 17) M ± SD | t | p | Cohen's d |
Age (year) | 22.53 ± 3.06 | 22.94 ± 1.75 | −0.48 | 0.634 | 0.165 |
Gaming time per day (hour) | 5.42 ± 3.19 | 1.01 ± 0.85 | 4.20 | <0.001*** | 1.892 |
Gaming experience (year) | 7.24 ± 4.66 | 3.71 ± 2.69 | 2.71 | 0.011* | 0.928 |
IGDS9-SF (score) | 29.00 ± 5.83 | 13.94 ± 4.12 | 8.70 | <0.001*** | 2.984 |
IAT (score) | 67.47 ± 9.70 | 31.88 ± 5.17 | 13.35 | <0.001*** | 4.580 |
BAI (score) | 5.24 ± 4.25 | 2.65 ± 2.45 | 2.18 | 0.039* | 0.746 |
BDI (score) | 7.29 ± 7.30 | 2.12 ± 5.21 | 2.38 | 0.024* | 0.817 |
Note: *p ≤ 0.05, **p ≤ 0.01, ***p ≤ 0.001.
Stimulus
Two models (one male and one female) were selected from the Tsinghua Facial Expression Database (Yang et al., 2020) to provide angry, happy, and neutral faces. To ensure significant valence differences between angry and happy faces while matching arousal levels (Posner, Russell, & Peterson, 2005; Russell, 1980), we recruited another group of participants to conduct a material evaluation experiment before the formal experiment. A t-test showed that the emotional valence of the angry and happy faces of the two models differed significantly [t (59) = −21.26, p < 0.001], whereas there was no significant difference in arousal [t (59) = 0.44, p = 0.660]. Hair, ears, neck, and other exterior details were cropped with an oval frame in Photoshop 2024. Using FantaMorph 5, each identity was morphed into 50 faces by linearly interpolating between the neutral face and the angry/happy face. An emotional unit, defined as the difference between two contiguous face pictures in the morph sequence, was converted to subjective emotion intensity for further calculation (Liu et al., 2023). Each face image (270 × 380 pixels) subtended 3.53° × 4.74° of visual angle. The face set, consisting of four such images and subtending 7.06° × 9.48° of visual angle, was presented at the center of the screen on a black background. Photoshop 2024 was used to standardize the luminance and contrast of all facial pictures.
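The interpolation underlying the emotional-unit scale can be illustrated with a minimal NumPy sketch. The authors used FantaMorph 5, which also warps facial geometry; the simple pixel blend below (the function name and the toy images are ours, not from the study) only shows how 50 evenly spaced morph levels span the range from neutral to fully emotional, with one emotional unit being the step between contiguous levels.

```python
import numpy as np

def morph_levels(neutral, emotional, n_levels=50):
    """Linearly interpolate between a neutral and an emotional face image.

    neutral, emotional: float arrays of identical shape (H, W[, C]).
    Returns a list of n_levels images; level k carries k 'emotional units'.
    (Hypothetical pixel-blend stand-in for the FantaMorph procedure,
    which additionally warps facial geometry.)
    """
    weights = np.linspace(1 / n_levels, 1.0, n_levels)  # units 1..50
    return [(1 - w) * neutral + w * emotional for w in weights]

# Toy 2x2 "images": level 50 equals the emotional endpoint,
# level 25 lies exactly halfway between the two endpoints.
neutral = np.zeros((2, 2))
emotional = np.ones((2, 2))
faces = morph_levels(neutral, emotional)
```

Because the blending weights are evenly spaced, the difference between any two contiguous morph levels is constant, which is what makes the emotional unit a usable linear intensity scale.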
Previous studies on the ensemble coding of crowd faces have employed face sets with uniformly distributed emotional intensity (Goldenberg et al., 2021; Goldenberg, Sweeny, Shpigel, & Gross, 2020; Haberman & Whitney, 2009). Besides, these studies typically followed the presentation of crowd facial expressions with either an adjustable facial expression (the test face; Goldenberg et al., 2020; Goldenberg et al., 2021; Haberman & Whitney, 2009) or a set of adjustable facial expressions (the test set; Haberman, Lee, & Whitney, 2015). Participants were instructed to adjust the test face or set to match the mean emotion of the crowd facial expressions that had been previously presented. Additionally, Haberman and Whitney (2010) found that the information provided by four of the 12 faces enabled participants to perform no differently on the task than when all 12 faces were presented. Therefore, to minimize the impact of eye movements on EEG data, the crowd faces in this study consisted of four facial expressions. The mean emotion of four facial expressions in each set was randomly selected among 11–40 emotional units, with each facial expression assigned mean ±3 or ±9 units. The minimum difference between any two facial expressions of one face set was 6 emotional units, ensuring a suprathreshold separation (Haberman & Whitney, 2011). In this study, participants adjusted the expression of a single test face to indicate the mean emotion they estimated.
Procedure
Participants sat 60 cm away from a monitor with a resolution of 1,920 × 1,080. The experimental program was developed using MATLAB R2022b. At the beginning of each trial, a white cross was displayed at the center of the screen for 600–800 ms. Following this, the crowd faces were presented for 1,500 ms before being replaced by the corresponding scrambled faces for 200 ms. These scrambled faces, created with Photoshop 2024 in 5-pixel units, were intended to disrupt further processing of facial expressions. Afterwards, participants had 4 s to mark the mean emotion (the average of emotional units) of the three facial expressions with the same emotional valence on the progress bar. This was followed by a blank screen for 1,000 ms.
The crowd facial expressions were divided into two categories: one category consisted of three angry faces and one happy face, and the other consisted of three happy faces and one angry face (i.e., the one facial expression with the different emotional valence is the interfering face). Throughout the experiment, the two types of crowd facial expressions and the position of the interfering expression within each set were presented randomly. Participants were instructed to estimate the mean emotion of three facial expressions with the same emotional valence, while ignoring the single facial expression with a different emotional valence. Specifically, if emotional units of the interfering facial expression exceeded the mean emotion of four faces (mean +3 or +9), it was categorized as high emotional intensity interference. In contrast, if emotional units of the interfering facial expression were smaller than the mean emotion of four faces (mean −3 or −9), it was classified as low emotional intensity interference (Haberman & Whitney, 2010). The progress bar ranged from 1 (neutral face) to 50 (extremely angry/happy face) emotional units, with the slider moving when participants held down the left mouse button and dragged left and right. As participants dragged the slider, the emotional units above the progress bar changed accordingly. The intensity of the facial expression above the progress bar increased as the slider was moved to the right and decreased as it was moved to the left. To counteract the anchoring effect (Goldenberg et al., 2021; Oriet & Brand, 2013), the starting point of the slider on the progress bar was randomized between trials (Haberman & Whitney, 2010). Participants released the left mouse button and clicked the right mouse button to confirm their estimation when they thought the facial expression above the progress bar matched the mean emotion of the three previously presented same-valence crowd facial expressions. 
The whole adjustment process was limited to 4 s, and the response was invalid if the right mouse button was not pressed within 4 s.
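Putting the stimulus constraints together (a set mean drawn from 11–40 emotional units, four faces at mean −9, −3, +3, and +9 units so that any two faces differ by at least 6 units, and one opposite-valence interferer whose offset sign defines high vs. low interference), a trial's face set can be sketched as follows. This is an illustrative reconstruction in Python under those stated assumptions, not the authors' experiment code, and all names are ours.

```python
import random

OFFSETS = (-9, -3, 3, 9)  # per-face offsets from the four-face set mean

def make_face_set(rng=random):
    """Build one crowd face set as described in the Methods (a sketch).

    Returns the four faces' emotional units, the index of the
    opposite-valence interferer, the interference label ('high' if the
    interferer exceeds the four-face mean, else 'low'), and the ground
    truth: the mean of the three same-valence target faces.
    """
    set_mean = rng.randint(11, 40)               # mean of all four faces
    units = [set_mean + o for o in OFFSETS]
    interferer_idx = rng.randrange(4)            # random position in the set
    label = "high" if OFFSETS[interferer_idx] > 0 else "low"
    targets = [u for i, u in enumerate(units) if i != interferer_idx]
    target_mean = sum(targets) / 3               # value participants estimate
    return units, interferer_idx, label, target_mean
```

Note that because the offsets sum to zero, the four-face mean equals the sampled set mean, while the target mean the participants must report shifts away from the interferer, which is exactly what makes high- vs. low-intensity interference separable in the behavioral data.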
Before the formal experiment, participants were required to complete 20 practice trials. After the practice session, the experimenter checked the practice results, including the difference between the participant's estimated and actual values for each trial and the validity of each response (whether the response was made within 4 s), to ensure that the participant understood the experimental task accurately and completed it effectively. The formal experiment consisted of 8 blocks of 60 trials each, for a total of 480 trials, with 120 trials per condition. Participants were allowed to rest for at least 1 min between blocks. The formal experiment lasted about 50 min (Fig. 1).
The scheme of the trial procedure. The angry crowd facial expressions with the happy interference and the happy crowd faces with the angry interference were randomly presented. Participants were instructed to judge the mean emotion of the three faces with the same valence on the progress bar
Citation: Journal of Behavioral Addictions 2025; 10.1556/2006.2025.00027
Electrophysiological recording and data preprocessing
According to the extended 10-20 system (Brain Products, Munich, Germany), EEG data from 64 scalp sites were recorded using tin electrodes mounted on elastic caps at a sampling rate of 1,000 Hz/channel, referenced to the electrode FCz. An electrode was positioned 10 mm below the right eye to record a vertical electrooculogram. The impedance of the electrodes was maintained below 5 kΩ. The continuous EEG signals underwent bandpass filtering within the frequency range of 0.01–100 Hz.
EEG data were analyzed offline with the EEGlab (version 2022.0; Delorme & Makeig, 2004) and ERPlab (version 9.10; Lopez-Calderon & Luck, 2014) toolboxes implemented in MATLAB R2022b. The raw data were digitally filtered with a bandpass filter (0.1–30 Hz). Channels exhibiting persistent noise were interpolated with EEGlab's multivariate local weighted regression tool prior to re-referencing to the average reference. The signal was then segmented into epochs from 200 ms before to 800 ms after the onset of the crowd facial expressions, and only trials with valid responses were retained. Independent component analysis (ICA; EEGlab "runica" function) was employed to remove artifacts including eye movements, eye blinks, and muscle activity. Thirty components were extracted in the ICA decomposition, and components were removed primarily on the basis of visual inspection; an average of 1.32 ± 0.64 (M ± SD) ICA components were rejected per participant, with a maximum of 2 and a minimum of 0 components removed. The 200 ms prior to stimulus onset served as the baseline for calibrating the ERP amplitudes, and trials with amplitudes exceeding ±80 μV were excluded. On average, there were 116.10 ± 5.49 (M ± SD) trials for angry crowd faces under the low emotional intensity interference condition and 115.82 ± 6.89 trials under the high emotional intensity interference condition, and 115.76 ± 6.52 trials for happy crowd faces under the low emotional intensity interference condition and 115.97 ± 5.61 trials under the high emotional intensity interference condition.
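The epoching, baseline-correction, and threshold-rejection steps reported above can be condensed into a short NumPy sketch. The authors did this in EEGlab/ERPlab under MATLAB; the function below (name and array layout are ours) is a simplified stand-in that omits filtering, interpolation, and ICA and keeps only the −200 to 800 ms epoch window, the pre-stimulus baseline, and the ±80 μV rejection criterion.

```python
import numpy as np

def preprocess_epochs(eeg, events, sfreq=1000, tmin=-0.2, tmax=0.8,
                      reject_uv=80.0):
    """Epoch, baseline-correct, and threshold-reject continuous EEG.

    eeg: (n_channels, n_samples) array in microvolts.
    events: sample indices of stimulus onsets (valid trials only).
    Returns an array of shape (n_kept_epochs, n_channels, n_times).
    """
    pre, post = int(-tmin * sfreq), int(tmax * sfreq)
    kept = []
    for onset in events:
        epoch = eeg[:, onset - pre:onset + post]
        # subtract the mean of the 200 ms pre-stimulus baseline per channel
        epoch = epoch - epoch[:, :pre].mean(axis=1, keepdims=True)
        # exclude trials exceeding +/-80 microvolts on any channel
        if np.abs(epoch).max() <= reject_uv:
            kept.append(epoch)
    return (np.stack(kept) if kept
            else np.empty((0, eeg.shape[0], pre + post)))
```

A real pipeline would run this after filtering and ICA cleaning, so that the amplitude criterion catches only residual artifacts rather than blinks.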
We analyzed ERPs elicited by the crowd facial expressions under the condition of emotional interference. The amplitudes of the N170, EPN, and frontal negative slow wave components were measured and analyzed. Based on previous research (Hinojosa et al., 2015; Matsuda & Nittono, 2018; Schindler & Bublatzky, 2020) and the topographical distribution of the grand-averaged ERP activity, the electrodes and time windows were selected as follows: the N170 component was measured at posterior electrodes (P7, P8, PO7, and PO8) during 165–185 ms; the EPN component was measured at six posterior electrodes (P7, P8, PO7, PO8, O1, and O2) within 250–320 ms; the frontal negative slow wave component was measured at central-frontal electrodes (Cz, C1, C2, C3, C4, FCz, FC1, FC2, FC3, FC4, Fz, F1, F2, F3, and F4) within 500–700 ms. Amplitudes were averaged across the selected electrodes of interest.
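Extracting a component amplitude then amounts to averaging the ERP over the chosen electrodes and time window. The generic helper below (a sketch with our own names, not the authors' pipeline) shows this for any of the three windows used here, e.g. the N170 at P7/P8/PO7/PO8 over 165–185 ms.

```python
import numpy as np

def mean_amplitude(erp, ch_names, times, picks, t_start, t_end):
    """Mean ERP amplitude over selected electrodes and a time window.

    erp: (n_channels, n_times) averaged waveform in microvolts.
    ch_names: channel labels matching erp's rows.
    times: (n_times,) vector in seconds (e.g. -0.2 to 0.8).
    picks: electrode labels to average (e.g. the N170 posterior set).
    t_start, t_end: window bounds in seconds, inclusive.
    """
    ch_idx = [ch_names.index(ch) for ch in picks]
    t_mask = (times >= t_start) & (times <= t_end)
    return erp[ch_idx][:, t_mask].mean()
```

The same call with the EPN electrodes and 0.250–0.320 s, or the central-frontal set and 0.500–0.700 s, yields the other two dependent measures fed into the ANOVAs.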
Design and data analysis
This study was a 2 (group: IGD vs. CG) × 2 (emotional valence: angry vs. happy) × 2 (interference: low emotional intensity vs. high emotional intensity) three-factor mixed experimental design. Among these factors, group is a between-subjects independent variable, while emotional valence and interference are within-subjects independent variables.
Behavioral analysis
The difference between the estimated and actual mean emotion of the three same-valence faces, as well as the absolute value of this difference, served as the dependent variables in the behavioral analysis. Separate 2 (group: IGD vs. CG) × 2 (emotional valence: angry vs. happy) × 2 (interference: low emotional intensity vs. high emotional intensity) repeated measures ANOVAs were conducted on the two indicators.
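The two behavioral indicators can be computed per condition as follows; positive signed error indexes overevaluation of the mean emotion, while the absolute error indexes overall ensemble-coding precision. This is an illustrative helper with our own variable names, under the assumption that estimates and ground truths are stored per trial.

```python
import numpy as np

def estimation_errors(estimated, actual):
    """Mean signed and mean absolute estimation error across trials.

    estimated: participant slider settings, in emotional units.
    actual: true mean emotion of the three same-valence faces per trial.
    Returns (mean signed error, mean absolute error); a positive signed
    error corresponds to overevaluation of the crowd's mean emotion.
    """
    estimated = np.asarray(estimated, dtype=float)
    actual = np.asarray(actual, dtype=float)
    signed = estimated - actual
    return signed.mean(), np.abs(signed).mean()
```

A signed error near zero with a large absolute error signals imprecise but unbiased estimation, which is why both indicators are analyzed.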
ERP analysis
Separate 2 (group: IGD vs. CG) × 2 (emotional valence: angry vs. happy) × 2 (interference: low emotional intensity vs. high emotional intensity) × 2 (hemisphere: left vs. right) repeated measures ANOVAs were conducted on the N170 and EPN amplitudes (hemisphere being a within-subjects independent variable). Moreover, a 2 (group) × 2 (emotional valence) × 2 (interference) repeated measures ANOVA was conducted on the frontal negative slow wave component.
All statistical analyses were carried out using SPSS version 27.0. Both the behavioral data and the ERP amplitudes were analyzed with repeated measures ANOVAs, with p-values corrected based on the Greenhouse-Geisser method when the assumption of sphericity was violated. The Bonferroni correction was implemented to correct for multiple comparisons in post hoc comparisons. For significant interactions, further analysis was conducted through simple-effect analysis. Moreover, partial eta-squared values were calculated to assess the effect size of the statistical results.
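The effect-size measure reported throughout the Results, partial eta-squared, is the ratio of an effect's sum of squares to that sum plus its error term. A one-line sketch (the SPSS output provides this directly; the function below is only for reference), with the conventional benchmarks used in the Results labels:

```python
def partial_eta_squared(ss_effect, ss_error):
    """Partial eta-squared for an ANOVA effect:
    SS_effect / (SS_effect + SS_error).
    Conventional benchmarks (after Cohen): ~0.01 small,
    ~0.06 medium, ~0.14 large."""
    return ss_effect / (ss_effect + ss_error)
```

For example, an effect whose sum of squares is 14 against an error sum of squares of 86 gives ηp² = 0.14, the threshold labeled a large effect in the Results.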
Ethics
This study complies with the Declaration of Helsinki and was approved by the Institutional Reviewing Board of Liaoning Normal University (No. LL2024112). Prior to the experiment, all participants provided written informed consent.
Results
Behavioral results
One of the dependent variables in this study is the difference between the estimated and actual mean emotion of the three same-valence facial expressions (Goldenberg et al., 2021). The ANOVA revealed a significant main effect of emotional valence [F (1,32) = 7.17, p = 0.012, ηp2 = 0.18, large effect size]: the difference for angry crowd faces (M ± SE, 0.62 ± 0.46) was larger than that for happy crowd faces (−0.47 ± 0.45). The main effect of interference also reached significance [F (1,32) = 187.53, p < 0.001, ηp2 = 0.85, large effect size]: the difference under low emotional intensity interference (−1.00 ± 0.42) was smaller than under high emotional intensity interference (1.14 ± 0.40). Further, the interaction of emotional valence and interference was significant [F (1,32) = 28.66, p < 0.001, ηp2 = 0.47, large effect size]. A simple-effect analysis showed that, under the high emotional intensity interference condition, the difference for angry crowd faces (1.94 ± 0.44) was larger than that for happy crowd faces (0.35 ± 0.47) [F (1,32) = 13.83, p < 0.001, ηp2 = 0.30], whereas no significant difference was found under the low emotional intensity interference condition (−0.71 ± 0.48 vs. −1.28 ± 0.45). Neither a significant main effect of group nor any other significant interaction was found (all ps > 0.05).
The other dependent variable is the absolute value of the difference between the estimated and actual mean emotion of the three same-valence facial expressions (Liu et al., 2023). For this indicator, the ANOVA revealed a significant main effect of emotional valence [F (1,32) = 40.70, p < 0.01, ηp2 = 0.56, large effect size]: the absolute difference for angry crowd faces (M ± SE, 6.75 ± 0.26) was larger than for happy crowd faces (5.77 ± 0.24). Additionally, the main effect of interference was significant [F (1,32) = 4.27, p = 0.047, ηp2 = 0.12, medium effect size]: the absolute difference under high emotional intensity interference (6.36 ± 0.24) was larger than under low emotional intensity interference (6.16 ± 0.24). Neither the main effect of group nor any interaction reached significance (all ps > 0.05).
ERP results
N170 component
The repeated measures ANOVA revealed a significant main effect of group [Fig. 2a; F (1,32) = 5.33, p = 0.028, ηp2 = 0.14, large effect size]: the N170 amplitudes in IGD (M ± SE, −1.69 ± 0.85 μV) were smaller (less negative) than in CG (−4.48 ± 0.85 μV). Moreover, the main effect of interference was significant [Fig. 2b; F (1,32) = 6.19, p = 0.018, ηp2 = 0.16, large effect size], with larger (more negative) N170 amplitudes under low emotional intensity interference (−3.19 ± 0.61 μV) than under high emotional intensity interference (−2.99 ± 0.60 μV). The main effect of hemisphere was also significant [F (1,32) = 4.76, p = 0.037, ηp2 = 0.13, medium effect size], with larger N170 amplitudes over the right hemisphere (−3.65 ± 0.77 μV) than over the left hemisphere (−2.52 ± 0.52 μV). Furthermore, the two-way interaction between group and emotional valence was significant [Fig. 2c; F (1,32) = 6.28, p = 0.018, ηp2 = 0.16, large effect size]. Simple-effect analysis revealed that, in IGD, the N170 amplitudes elicited by angry crowd faces (−1.82 ± 0.87 μV) were significantly larger than those elicited by happy crowd faces (−1.57 ± 0.84 μV) [F (1,32) = 4.76, p = 0.037, ηp2 = 0.13], whereas there was no significant difference in CG (−4.40 ± 0.87 μV vs. −4.56 ± 0.84 μV). Beyond the above, no significant main effect of emotional valence or other interactions were found (all ps > 0.05).
a) The ERP waveforms of the N170 component for IGD (purple line) and CG (pink line) were depicted at the representative electrode PO8 and the corresponding difference topographical maps during the N170 time window (165–185 ms). b) The ERP waveforms of the N170 component for low emotional intensity interference (light gray line) and high emotional intensity interference (dark gray line) conditions were depicted at the representative electrode PO8 and the corresponding difference topographical maps during the N170 time window (165–185 ms). c) The ERP waveforms of the N170 component for IGD angry crowd faces (solid cyan line), IGD happy crowd faces (dashed cyan line), CG angry crowd faces (solid orange line), and CG happy crowd faces (dashed orange line) conditions were depicted at the representative electrode PO8 and the corresponding difference topographical maps during the N170 time window (165–185 ms)
Citation: Journal of Behavioral Addictions 2025; 10.1556/2006.2025.00027
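The partial eta squared values reported throughout these results follow directly from each F statistic and its degrees of freedom, via ηp² = (F × df_effect) / (F × df_effect + df_error). A minimal sketch of this conversion (the function name is ours, for illustration), checked against the group and interference main effects above:

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Convert an F statistic to partial eta squared:
    eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Group main effect above: F(1, 32) = 5.33 -> eta_p^2 = 0.14
print(round(partial_eta_squared(5.33, 1, 32), 2))  # → 0.14
# Interference main effect above: F(1, 32) = 6.19 -> eta_p^2 = 0.16
print(round(partial_eta_squared(6.19, 1, 32), 2))  # → 0.16
```

By Cohen's conventions, ηp² values of roughly 0.01, 0.06, and 0.14 mark small, medium, and large effects, which matches the labels used in the text.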
EPN component
The repeated measures ANOVA revealed that the main effect of interference was significant [Fig. 3a; F (1,32) = 7.66, p = 0.009, ηp2 = 0.19, large effect size], with the EPN amplitudes under low emotional intensity interference (M ± SE, 2.17 ± 0.49 μV) being more negative than under high emotional intensity interference (2.38 ± 0.52 μV). Further, the two-way interaction between group and emotional valence was significant [Fig. 3b; F (1,32) = 7.56, p = 0.010, ηp2 = 0.19, large effect size]. The simple effect analysis revealed that the EPN amplitudes elicited by happy crowd faces (2.03 ± 0.70 μV) in CG were significantly more negative than those evoked by angry crowd faces (2.28 ± 0.72 μV) [F (1,32) = 5.09, p = 0.031, ηp2 = 0.14], whereas there was no significant difference in IGD (2.48 ± 0.70 μV vs. 2.30 ± 0.72 μV). In addition, the two-way interaction between interference and hemisphere was significant [F (1,32) = 8.42, p = 0.007, ηp2 = 0.21, large effect size]. The simple effect analysis revealed that the EPN amplitudes under low emotional intensity interference (2.10 ± 0.51 μV) were more negative than under high emotional intensity interference (2.42 ± 0.54 μV) in the right hemisphere [F (1,32) = 10.69, p = 0.003, ηp2 = 0.25], whereas there was no significant difference in the left hemisphere (2.23 ± 0.53 μV vs. 2.34 ± 0.55 μV). Apart from these results, there were no other significant main effects or interactions (all ps > 0.05).
a) The ERP waveforms of the EPN component for low emotional intensity interference (light gray line) and high emotional intensity interference (dark gray line) conditions were depicted at the representative electrode PO8 and the corresponding difference topographical maps during the EPN time window (250–320 ms). b) The ERP waveforms of the EPN component for IGD angry crowd faces (solid cyan line), IGD happy crowd faces (dashed cyan line), CG angry crowd faces (solid orange line), and CG happy crowd faces (dashed orange line) conditions were depicted at the representative electrode PO8 and the corresponding difference topographical maps during the EPN time window (250–320 ms)
Frontal negative slow wave component
The repeated measures ANOVA revealed that the three-way interaction of group, emotional valence and interference was significant [Fig. 4; F (1,32) = 9.47, p = 0.004, ηp2 = 0.23, large effect size]. Specifically, the simple effect analysis revealed that when CG participants processed angry crowd faces, high emotional intensity interference elicited a more negative frontal negative slow wave (M ± SE, −0.56 ± 0.36 μV) than low emotional intensity interference (−0.19 ± 0.33 μV) [F (1,32) = 9.04, p = 0.005, ηp2 = 0.22]. Meanwhile, when CG participants faced high emotional intensity interference, the frontal negative slow wave amplitudes induced by angry crowd faces (−0.56 ± 0.36 μV) were more negative than those induced by happy crowd faces (−0.11 ± 0.35 μV) [F (1,32) = 13.17, p < 0.001, ηp2 = 0.29]. However, there was no significant difference across any of these conditions in IGD (−0.03 ± 0.33 μV vs. 0.01 ± 0.36 μV vs. 0.01 ± 0.32 μV vs. −0.09 ± 0.35 μV). Besides the interaction mentioned above, we did not find any other significant main effects or interactions (all ps > 0.05). The non-significant behavioral and ERP results are reported in the Supplementary Materials.
The ERP waveforms of the frontal negative slow wave component for IGD angry crowd faces under high emotional intensity interference (light red solid line), IGD angry crowd faces under low emotional intensity interference (light red dashed line), IGD happy crowd faces under high emotional intensity interference (light blue solid line), IGD happy crowd faces under low emotional intensity interference (light blue dashed line), CG angry crowd faces under high emotional intensity interference (dark red solid line), CG angry crowd faces under low emotional intensity interference (dark red dashed line), CG happy crowd faces under high emotional intensity interference (dark blue solid line), and CG happy crowd faces under low emotional intensity interference (dark blue dashed line) conditions were depicted at the representative electrode FC1 and the corresponding difference topographical maps during the frontal negative slow wave time window (500–700 ms)
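The amplitudes analyzed in the results above are conventionally obtained by averaging the ERP waveform within each component's time window (e.g., 165–185 ms for the N170, 500–700 ms for the frontal negative slow wave) at the chosen electrode. A minimal sketch of that windowed-averaging step, assuming a hypothetical single-electrode epoch sampled at 500 Hz with a 200 ms pre-stimulus baseline (the sampling rate and epoch layout are our assumptions for illustration, not the authors' recording parameters):

```python
import numpy as np

def mean_amplitude(erp, t_start_ms, t_end_ms, sfreq=500, baseline_ms=200):
    """Average a single-electrode ERP waveform (in microvolts) over a
    time window given in ms relative to stimulus onset.
    The epoch is assumed to start `baseline_ms` before onset."""
    start = int((baseline_ms + t_start_ms) * sfreq / 1000)
    end = int((baseline_ms + t_end_ms) * sfreq / 1000)
    return float(np.mean(erp[start:end]))

# Synthetic example: a flat waveform at -3.0 uV over a 1200 ms epoch
erp = np.full(600, -3.0)
print(mean_amplitude(erp, 165, 185))  # N170 window → -3.0
```

Per-condition means like those reported in the ANOVAs would then be obtained by applying this window average to each participant's condition-averaged waveform.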
Discussion
This ERP study investigated ensemble coding of crowd facial expressions in IGD under emotional interference. Unlike studies on the processing of single facial expressions, our findings provided electrophysiological insights into the rapid extraction of mean emotion from crowd facial expressions in IGD under emotional interference, thereby enhancing understanding of their social cognitive traits and informing clinical practice (discussions unrelated to the effects of group are provided in the Supplementary Materials).
IGD participants exhibited weaker ensemble coding ability of crowd facial expressions
In the early stage, IGD participants showed a lower ability in ensemble coding of crowd facial expressions than CG participants, with the N170 amplitudes of IGD being significantly smaller than those of CG. Previous behavioral studies on single facial expression recognition found that IGD individuals had lower recognition accuracy, longer response times (Fan, He, Zheng, Nie, et al., 2023) and decreased sensitivity to micro-expressions (MEs) compared with healthy controls (Fan, He, Zheng, Li, & Meng, 2023). These findings indicate that IGD individuals have impaired facial expression recognition, which is consistent with our findings on the N170 component. Specifically, there is a consensus that the N170 is sensitive to facial stimuli, reflecting the early automatic structural encoding of the face (Schindler et al., 2023; Xia et al., 2014). Moreover, the N170 is an early ERP component, arising at a stage when ensemble representation dominates over individual representation (Liu et al., 2023). Therefore, the smaller N170 amplitudes of IGD participants indicate a weaker ability to ensemble-code crowd facial expressions, which supports our first hypothesis.
IGD participants exhibited an automatic processing bias towards angry crowd faces during mean emotion extraction
In the early stage of extracting mean emotion, IGD participants showed an automatic bias towards angry crowd faces, while CG participants showed no bias for either angry or happy crowd faces. Specifically, the results showed that the N170 amplitudes induced by angry crowd faces in IGD were significantly larger than those induced by happy crowd faces, while there was no significant difference in CG. Previous behavioral studies on the recognition of single facial expressions found that, compared to healthy controls, IGD participants were more accurate in recognizing angry MEs than happy MEs (Fan, He, Zheng, Nie, et al., 2023), had more lenient criteria for recognizing angry MEs, and exhibited a higher sensitivity level to angry MEs (Fan, He, Zheng, Li, & Meng, 2023). These findings indicate an angry facial expression recognition bias in IGD, which is consistent with our findings. Specifically, studies have found that the more negative N170 amplitudes elicited by threatening facial expressions, such as angry faces, reflect a processing bias (O'Toole et al., 2013; Rellecke et al., 2012). Therefore, IGD participants in this study exhibited an abnormal early processing bias towards angry crowd faces during mean emotion extraction, which partly supports our second hypothesis.
IGD participants exhibited less selective sensitivity to happy crowd faces during mean emotion extraction
During mean emotion extraction, compared to CG participants, who exhibited a selective processing bias towards happy crowd faces, IGD participants were less sensitive to happy crowd faces. Our results showed that happy crowd faces elicited more negative EPN amplitudes than angry crowd faces in CG, while no significant difference was found in IGD. The EPN component reflects selective attention to emotional stimuli and relates to motivated attention to task-relevant stimuli during the perceptual stage (Farkas, Oliver, & Sabatinelli, 2020; Rellecke et al., 2012). Moreover, studies on single facial expressions found that threatening facial expressions evoked more negative EPN amplitudes than happy faces, reflecting a processing bias towards threatening expressions (Rellecke et al., 2012; Schupp, Öhman, et al., 2004). In contrast, this study found that CG participants had a selective attention bias towards happy crowd faces.
Nevertheless, many studies on emotional scenes in healthy participants found that pleasant scenes elicited more negative EPN amplitudes than unpleasant scenes (Frank & Sabatinelli, 2019; Schupp, Junghöfer, Weike, & Hamm, 2004; Weinberg & Hajcak, 2010), which is similar to the result in this study. This may suggest that ensemble coding of crowd facial emotion involves more emotional information and more complex processing than single facial expression processing, because this study presented crowd faces for a longer duration (Goldenberg et al., 2021; Haberman & Whitney, 2010), allowing for more adequate individual representation to enhance ensemble representation (Li et al., 2016). Notably, we found no selective attentional bias towards happy crowd faces in IGD, which may indicate that IGD individuals are less sensitive to happy crowd faces than normal individuals when active selective processing is required for extracting mean emotion.
IGD participants failed to actively adopt appropriate cognitive strategies to inhibit interferences in the later stage of ensemble coding
CG participants were sensitive to happy interference and its intensity level, but IGD participants displayed similar processing patterns across all conditions, suggesting that IGD participants failed to adopt appropriate cognitive strategies to inhibit interference during ensemble coding. We found that, under high-intensity happy interference, CG showed more negative frontal negative slow wave amplitudes than under low-intensity happy interference. Meanwhile, under high emotional intensity interference, CG exhibited more negative amplitudes for angry crowd faces (happy interference) than for happy crowd faces (angry interference). However, IGD participants showed no difference in amplitudes under any condition. Moreover, although not statistically significant, IGD participants' judgments of mean emotion were less accurate than CG's in all conditions.
Since the position of the interfering face was random in each trial, participants had to scan four facial expressions to exclude a randomly positioned interfering expression, which may rely more on individual representation and involve working memory. Because individual representation develops more fully in the later stage of facial expression ensemble coding (Liu et al., 2023), we anticipated that interference-related results would be observed in this phase. Our findings on the late frontal negative slow wave component confirmed this hypothesis. The frontal negative slow wave is associated with active control of working memory (Bosch et al., 2001; Forester et al., 2020), and greater cognitive resource allocation induces more negative amplitudes (Rösler, Heil, & Röder, 1997). Therefore, the results suggested that CG participants required more cognitive resources to inhibit high-intensity happy interference than low-intensity happy interference in the later stage of ensemble coding. Meanwhile, under high emotional intensity interference, CG expended more resources resisting high-intensity happy interference than high-intensity angry interference. In contrast, IGD participants showed similar processing patterns across all conditions. This indicates a relatively passive cognitive processing style, suggesting that they did not actively adopt appropriate cognitive strategies to resist interference during ensemble coding. This finding partially supports our third hypothesis. Additionally, the lack of significant behavioral differences between groups, which differs from our second and third hypotheses, may be because IGD participants were screened by two well-validated questionnaires and thus had less severe addiction levels than clinically diagnosed patients, with significant differences found only at the electrophysiological level.
Limitations
A few caveats need to be discussed. First, participants were undergraduates and postgraduates identified with IGD via two screening tools, differing from conventional diagnostic methods for treatment-seeking patients. Caution is needed when generalizing these findings to the broader IGD patient population. Second, the sample size of this study, though modest, yielded intriguing insights into the ensemble coding of facial expressions in IGD. Future studies could run an a priori power analysis to determine the required sample size and pre-register the study protocol to increase transparency. Third, the gender ratio of participants was imbalanced. After controlling for the gender factor (see Supplementary Materials), behavioral results showed some changes, but the significant ERP group differences remained stable, indicating minimal gender impact. Fourth, this study involved only two facial expression categories. Future research could investigate the ensemble coding of various negative crowd facial expressions in IGD. Fifth, the scarcity of relevant studies limits ERP interpretation, especially for the frontal negative slow wave component. Meanwhile, this study offered a limited investigation of the neural mechanisms underlying crowd facial expression processing in IGD. Future research could focus more on differences between crowd and single facial expression processing, employing a variety of methods (e.g., neuroimaging).
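The a priori power analysis recommended above can be approximated by simulation when closed-form calculators do not cover a design. A minimal Monte Carlo sketch for a simple two-group comparison (the effect size d = 0.9, alpha, and simulation count are illustrative assumptions, not values estimated from this study):

```python
import numpy as np
from scipy.stats import ttest_ind

def simulated_power(n_per_group, d, alpha=0.05, n_sims=2000, seed=0):
    """Estimate power for an independent-samples t-test by simulating
    two normal groups whose means differ by Cohen's d."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(d, 1.0, n_per_group)
        if ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / n_sims

# Larger samples yield higher power for the same assumed effect size
print(simulated_power(17, 0.9) < simulated_power(40, 0.9))  # → True
```

Running such a simulation before data collection would indicate whether n = 17 per group is adequate for the effect sizes a study expects, or whether recruitment should be expanded.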
Conclusions
We investigated how IGD individuals process crowd facial emotion under emotional interference and its temporal dynamics. The current study showed that IGD participants exhibited a lower ability in ensemble coding of crowd facial expressions. They also displayed an early automatic processing bias towards angry crowd faces and reduced sensitivity to happy crowd faces in the subsequent selective processing phase during mean emotion extraction. Under emotional interference, IGD participants exhibited a relatively passive cognitive processing style, suggesting that they may not actively adopt appropriate strategies to inhibit interference in ensemble coding. In summary, this study provided the first electrophysiological evidence for the characteristics of crowd facial emotion ensemble coding in IGD, offering empirical support for the construction of future theories. Meanwhile, the experimental setup and procedure of this study could serve as a reference for future relevant research.
Funding sources
This work was supported by the National Natural Science Foundation of China [grant no. 31970991], the Liaoning Natural Science Foundation of China [grant no. 2023-MS-252], and the Scientific Research and Innovation Team of Liaoning Normal University [grant no. 24TD004].
Authors' contribution
QC contributed to data collection, analysis and interpretation, as well as study design and manuscript writing. BH contributed to data collection, analysis and interpretation, software and visualization. CF contributed to critical revision of the manuscript. WL contributed to validation, study supervision and providing resources. WH contributed to critical revision of the manuscript, study supervision, funding acquisition and study conception. All authors refined and approved the submitted version of the manuscript.
Conflict of interest
The authors declare no conflict of interest.
Supplementary material
Supplementary data to this article can be found online at https://doi.org/10.1556/2006.2025.00027.
References
Aldunate, N., López, V., & Bosman, C. A. (2018). Early influence of affective context on emotion perception: EPN or early-N400? Frontiers in Neuroscience, 12, 708. https://doi.org/10.3389/fnins.2018.00708.
American Psychiatric Association (2013). Diagnostic and statistical manual of mental disorders: DSM-5 (Vol. 5). Washington, DC: American Psychiatric Association.
Bardhoshi, G., Duncan, K., & Erford, B. T. (2016). Psychometric meta‐analysis of the English version of the Beck anxiety inventory. Journal of Counseling & Development, 94(3), 356–373. https://doi.org/10.1002/jcad.12090.
Beck, A. T. (1961). An inventory for measuring depression. Archives of General Psychiatry, 4(6), 561. https://doi.org/10.1001/archpsyc.1961.01710120031004.
Beck, A. T., Brown, G., Epstein, N., & Steer, R. A. (1988). An inventory for measuring clinical anxiety: Psychometric properties. Journal of Consulting and Clinical Psychology, 56(6), 893–897. https://doi.org/10.1037/0022-006X.56.6.893.
Bosch, V., Mecklinger, A., & Friederici, A. D. (2001). Slow cortical potentials during retention of object, spatial, and verbal information. Cognitive Brain Research, 10(3), 219–237. https://doi.org/10.1016/S0926-6410(00)00040-9.
Brenner, C. A., Rumak, S. P., Burns, A. M. N., & Kieffaber, P. D. (2014). The role of encoding and attention in facial emotion memory: An EEG investigation. International Journal of Psychophysiology, 93(3), 398–410. https://doi.org/10.1016/j.ijpsycho.2014.06.006.
Chang, Q., & He, W. (2024). Abnormal emotional processing in people with internet gaming disorder. Advances in Psychological Science, 32(7), 1152–1163. https://doi.org/10.3724/SP.J.1042.2024.01152.
Chang, R., Lee, M., Im, J., Choi, K., Kim, J., Chey, J., … Ahn, W. (2023). Biopsychosocial factors of gaming disorder: A systematic review employing screening tools with well-defined psychometric properties. Frontiers in Psychiatry, 14, 1200230. https://doi.org/10.31234/osf.io/b8fxr.
Chen, Y., Yu, H., & Gao, X. (2022). Influences of emotional information on response inhibition in gaming disorder: Behavioral and ERP evidence from go/nogo task. International Journal of Environmental Research and Public Health, 19(23), 16264. https://doi.org/10.3390/ijerph192316264.
Deffke, I., Sander, T., Heidenreich, J., Sommer, W., Curio, G., Trahms, L., & Lueschow, A. (2007). MEG/EEG sources of the 170-ms response to faces are co-localized in the fusiform gyrus. NeuroImage, 35(4), 1495–1501. https://doi.org/10.1016/j.neuroimage.2007.01.034.
Delorme, A., & Makeig, S. (2004). EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods, 134(1), 9–21. https://doi.org/10.1016/j.jneumeth.2003.10.009.
Dietrich, A., & Kanso, R. (2010). A review of EEG, ERP, and neuroimaging studies of creativity and insight. Psychological Bulletin, 136(5), 822–848. https://doi.org/10.1037/a0019749.
Dong, G., & Potenza, M. N. (2016). Risk-taking and risky decision-making in internet gaming disorder: Implications regarding online gaming in the setting of negative consequences. Journal of Psychiatric Research, 73, 1–8. https://doi.org/10.1016/j.jpsychires.2015.11.011.
Fan, L., He, J., Zheng, Y., Li, C., & Meng, Y. (2023). Sensitivity and response criterion in facial micro-expression recognition among internet gaming disorder. Motivation and Emotion, 47(5), 842–853. https://doi.org/10.1007/s11031-023-10030-5.
Fan, L., He, J., Zheng, Y., Nie, Y., Chen, T., & Zhang, H. (2023). Facial micro-expression recognition impairment and its relationship with social anxiety in internet gaming disorder. Current Psychology, 42(24), 21021–21030. https://doi.org/10.1007/s12144-022-02958-7.
Farkas, A. H., Oliver, K. I., & Sabatinelli, D. (2020). Emotional and feature‐based modulation of the early posterior negativity. Psychophysiology, 57(2), e13484. https://doi.org/10.1111/psyp.13484.
Forester, G., Halbeisen, G., Walther, E., & Kamp, S.-M. (2020). Frontal ERP slow waves during memory encoding are associated with affective attitude formation. International Journal of Psychophysiology, 158, 389–399. https://doi.org/10.1016/j.ijpsycho.2020.11.003.
Frangos, C. C., Frangos, C. C., & Sotiropoulos, I. (2012). A meta-analysis of the reliability of Young’s internet addiction test. In Proceedings of the World congress on engineering (Vol. 1, pp. 368–371). London, United Kingdom: World Congress on Engineering.
Frank, D. W., & Sabatinelli, D. (2019). Hemodynamic and electrocortical reactivity to specific scene contents in emotional perception. Psychophysiology, 56(6), e13340. https://doi.org/10.1111/psyp.13340.
Gao, C., Conte, S., Richards, J. E., Xie, W., & Hanayik, T. (2019). The neural sources of N170: Understanding timing of activation in face‐selective areas. Psychophysiology, 56(6), e13336. https://doi.org/10.1111/psyp.13336.
Gao, Y., Wang, J., & Dong, G. (2022). The prevalence and possible risk factors of internet gaming disorder among adolescents and young adults: Systematic reviews and meta-analyses. Journal of Psychiatric Research, 154, 35–43. https://doi.org/10.1016/j.jpsychires.2022.06.049.
Goldenberg, A., Sweeny, T. D., Shpigel, E., & Gross, J. J. (2020). Is this my group or not? The role of ensemble coding of emotional expressions in group categorization. Journal of Experimental Psychology: General, 149(3), 445–460. https://doi.org/10.1037/xge0000651.
Goldenberg, A., Weisz, E., Sweeny, T. D., Cikara, M., & Gross, J. J. (2021). The crowd-emotion-amplification effect. Psychological Science, 32(3), 437–450. https://doi.org/10.1177/0956797620970561.
Haberman, J., Lee, P., & Whitney, D. (2015). Mixed emotions: Sensitivity to facial variance in a crowd of faces. Journal of Vision, 15(4), 16. https://doi.org/10.1167/15.4.16.
Haberman, J., & Whitney, D. (2007). Rapid extraction of mean emotion and gender from sets of faces. Current Biology, 17(17), R751–R753. https://doi.org/10.1016/j.cub.2007.06.039.
Haberman, J., & Whitney, D. (2009). Seeing the mean: Ensemble coding for sets of faces. Journal of Experimental Psychology: Human Perception and Performance, 35(3), 718–734. https://doi.org/10.1037/a0013899.
Haberman, J., & Whitney, D. (2010). The visual system discounts emotional deviants when extracting average expression. Attention, Perception & Psychophysics, 72(7), 1825–1838. https://doi.org/10.3758/APP.72.7.1825.
Haberman, J., & Whitney, D. (2011). Efficient summary statistical representation when change localization fails. Psychonomic Bulletin & Review, 18(5), 855–859. https://doi.org/10.3758/s13423-011-0125-6.
He, J., Liu, C., Guo, Y., & Zhao, L. (2011). Deficits in early-stage face perception in excessive internet users. Cyberpsychology, Behavior, and Social Networking, 14(5), 303–308. https://doi.org/10.1089/cyber.2009.0333.
He, J., Zheng, Y., Fan, L., Pan, T., & Nie, Y. (2019). Automatic processing advantage of cartoon face in internet gaming disorder: Evidence from P100, N170, P200, and MMN. Frontiers in Psychiatry, 10, 824. https://doi.org/10.3389/fpsyt.2019.00824.
Hinojosa, J. A., Mercado, F., & Carretié, L. (2015). N170 sensitivity to facial expression: A meta-analysis. Neuroscience & Biobehavioral Reviews, 55, 498–509. https://doi.org/10.1016/j.neubiorev.2015.06.002.
Jackson-Koku, G. (2016). Beck depression inventory. Occupational Medicine, 66(2), 174–175. https://doi.org/10.1093/occmed/kqv087.
Langeslag, S. J. E., Gootjes, L., & Van Strien, J. W. (2018). The effect of mouth opening in emotional faces on subjective experience and the early posterior negativity amplitude. Brain and Cognition, 127, 51–59. https://doi.org/10.1016/j.bandc.2018.10.003.
Lee, J., Lee, S., Chun, J. W., Cho, H., Kim, D., & Jung, Y. (2015). Compromised prefrontal cognitive control over emotional interference in adolescents with internet gaming disorder. Cyberpsychology, Behavior, and Social Networking, 18(11), 661–668. https://doi.org/10.1089/cyber.2015.0231.
Li, H., Ji, L., Tong, K., Ren, N., Chen, W., Liu, C. H., & Fu, X. (2016). Processing of individual items during ensemble coding of facial expressions. Frontiers in Psychology, 7. https://doi.org/10.3389/fpsyg.2016.01332.
Light, G. A., Williams, L. E., Minow, F., Sprock, J., Rissling, A., Sharp, R., … Braff, D. L. (2010). Electroencephalography (EEG) and event‐related potentials (ERPs) with human participants. Current Protocols in Neuroscience, 52(1). https://doi.org/10.1002/0471142301.ns0625s52.
Liu, R., Ye, Q., Hao, S., Li, Y., Shen, L., & He, W. (2023). The relationship between ensemble coding and individual representation of crowd facial emotion. Biological Psychology, 180, 108593. https://doi.org/10.1016/j.biopsycho.2023.108593.
Lopez-Calderon, J., & Luck, S. J. (2014). ERPLAB: An open-source toolbox for the analysis of event-related potentials. Frontiers in Human Neuroscience, 8. https://doi.org/10.3389/fnhum.2014.00213.
Luo, W., Feng, W., He, W., Wang, N., & Luo, Y. (2010). Three stages of facial expression processing: ERP study with rapid serial visual presentation. NeuroImage, 49(2), 1857–1867. https://doi.org/10.1016/j.neuroimage.2009.09.018.
Mardaga, S., & Iakimova, G. (2014). Neurocognitive processing of emotion facial expressions in individuals with self-reported depressive symptoms: The role of personality and anxiety. Neurophysiologie Clinique/Clinical Neurophysiology, 44(5), 447–455. https://doi.org/10.1016/j.neucli.2014.08.007.
Matsuda, I., & Nittono, H. (2015). The intention to conceal activates the right prefrontal cortex: An event-related potential study. NeuroReport, 26(4), 223–227. https://doi.org/10.1097/WNR.0000000000000332.
Matsuda, I., & Nittono, H. (2018). A concealment-specific frontal negative slow wave is generated from the right prefrontal cortex in the concealed information test. Biological Psychology, 135, 194–203. https://doi.org/10.1016/j.biopsycho.2018.04.002.
Monacis, L., Palo, V. D., Griffiths, M. D., & Sinatra, M. (2016). Validation of the Internet Gaming Disorder Scale – Short-Form (IGDS9-SF) in an Italian-speaking sample. Journal of Behavioral Addictions, 5(4), 683–690. https://doi.org/10.1556/2006.5.2016.083.
Nie, Y., Pan, T., He, J., & Li, Y. (2024). Impaired social reward processing in individuals with Internet gaming disorder and its relationship with early face perception. Addictive Behaviors, 153, 108006. https://doi.org/10.1016/j.addbeh.2024.108006.
O'Toole, L. J., DeCicco, J. M., Berthod, S., & Dennis, T. A. (2013). The N170 to angry faces predicts anxiety in typically developing children over a two-year period. Developmental Neuropsychology, 38(5), 352–363. https://doi.org/10.1080/87565641.2013.802321.
Oriet, C., & Brand, J. (2013). Size averaging of irrelevant stimuli cannot be prevented. Vision Research, 79, 8–16. https://doi.org/10.1016/j.visres.2012.12.004.
Pawlikowski, M., Altstötter-Gleich, C., & Brand, M. (2013). Validation and psychometric properties of a short version of Young’s internet addiction test. Computers in Human Behavior, 29(3), 1212–1223. https://doi.org/10.1016/j.chb.2012.10.014.
Peng, X., Cui, F., Wang, T., & Jiao, C. (2017). Unconscious processing of facial expressions in individuals with internet gaming disorder. Frontiers in Psychology, 8, 1059. https://doi.org/10.3389/fpsyg.2017.01059.
Pontes, H. M., & Griffiths, M. D. (2014). Assessment of internet gaming disorder in clinical research: Past and present perspectives. Clinical Research and Regulatory Affairs, 31(2–4), 35–48. https://doi.org/10.3109/10601333.2014.962748.
Pontes, H. M., & Griffiths, M. D. (2015). Measuring DSM-5 internet gaming disorder: Development and validation of a short psychometric scale. Computers in Human Behavior, 45, 137–143. https://doi.org/10.1016/j.chb.2014.12.006.
Posner, J., Russell, J. A., & Peterson, B. S. (2005). The circumplex model of affect: An integrative approach to affective neuroscience, cognitive development, and psychopathology. Development and Psychopathology, 17(03). https://doi.org/10.1017/S0954579405050340.
Reinke, P., Deneke, L., & Ocklenburg, S. (2024). Asymmetries in event-related potentials part 1: A systematic review of face processing studies. International Journal of Psychophysiology, 202, 112386. https://doi.org/10.1016/j.ijpsycho.2024.112386.
Rellecke, J., Sommer, W., & Schacht, A. (2012). Does processing of emotional facial expressions depend on intention? Time-resolved evidence from event-related brain potentials. Biological Psychology, 90(1), 23–32. https://doi.org/10.1016/j.biopsycho.2012.02.002.
Rellecke, J., Sommer, W., & Schacht, A. (2013). Emotion effects on the N170: A question of reference? Brain Topography, 26(1), 62–71. https://doi.org/10.1007/s10548-012-0261-y.
Rösler, F., Heil, M., & Röder, B. (1997). Slow negative brain potentials as reflections of specific modular resources of cognition. Biological Psychology, 45(1–3), 109–141. https://doi.org/10.1016/S0301-0511(96)05225-8.
Rossignol, M., Campanella, S., Maurage, P., Heeren, A., Falbo, L., & Philippot, P. (2012). Enhanced perceptual responses during visual processing of facial stimuli in young socially anxious individuals. Neuroscience Letters, 526(1), 68–73. https://doi.org/10.1016/j.neulet.2012.07.045.
Rossion, B., & Caharel, S. (2011). ERP evidence for the speed of face categorization in the human brain: Disentangling the contribution of low-level visual cues from face perception. Vision Research, 51(12), 1297–1311. https://doi.org/10.1016/j.visres.2011.04.003.
Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161–1178. https://doi.org/10.1037/h0077714.
Schindler, S., Bruchmann, M., & Straube, T. (2023). Beyond facial expressions: A systematic review on effects of emotional relevance of faces on the N170. Neuroscience & Biobehavioral Reviews, 153, 105399. https://doi.org/10.1016/j.neubiorev.2023.105399.
Schindler, S., & Bublatzky, F. (2020). Attention and emotion: An integrative review of emotional face processing as a function of attention. Cortex, 130, 362–386. https://doi.org/10.1016/j.cortex.2020.06.010.
Schmuck, J., Schnuerch, R., Kirsten, H., Shivani, V., & Gibbons, H. (2023). The influence of selective attention to specific emotions on the processing of faces as revealed by event‐related brain potentials. Psychophysiology, 60(10), e14325. https://doi.org/10.1111/psyp.14325.
Schupp, H. T., Junghöfer, M., Weike, A. I., & Hamm, A. O. (2004). The selective processing of briefly presented affective pictures: An ERP analysis. Psychophysiology, 41(3), 441–449. https://doi.org/10.1111/j.1469-8986.2004.00174.x.
Schupp, H. T., Öhman, A., Junghöfer, M., Weike, A. I., Stockburger, J., & Hamm, A. O. (2004). The facilitated processing of threatening faces: An ERP analysis. Emotion, 4(2), 189–200. https://doi.org/10.1037/1528-3542.4.2.189.
Schupp, H. T., Stockburger, J., Bublatzky, F., Junghöfer, M., Weike, A. I., & Hamm, A. O. (2007). Explicit attention interferes with selective emotion processing in human extrastriate cortex. BMC Neuroscience, 8(1), 16. https://doi.org/10.1186/1471-2202-8-16.
Schupp, H. T., Stockburger, J., Codispoti, M., Junghöfer, M., Weike, A. I., & Hamm, A. O. (2007). Selective visual attention to emotion. The Journal of Neuroscience, 27(5), 1082–1089. https://doi.org/10.1523/JNEUROSCI.3223-06.2007.
Schweinberger, S. R., & Neumann, M. F. (2016). Repetition effects in human ERPs to faces. Cortex, 80, 141–153. https://doi.org/10.1016/j.cortex.2015.11.001.
Severo, R. B., Barbosa, A. P. P. N., Fouchy, D. R. C., Coelho, F. M. D. C., Pinheiro, R. T., De Figueiredo, V. L. M., … Pinheiro, K. A. T. (2020). Development and psychometric validation of Internet Gaming Disorder Scale-Short-Form (IGDS9-SF) in a Brazilian sample. Addictive Behaviors, 103, 106191. https://doi.org/10.1016/j.addbeh.2019.106191.
Shin, Y., Kim, H., Kim, S., & Kim, J. (2021). A neural mechanism of the relationship between impulsivity and emotion dysregulation in patients with Internet gaming disorder. Addiction Biology, 26(3), e12916. https://doi.org/10.1111/adb.12916.
Sit, H. F., Chang, C. I., Yuan, G. F., Chen, C., Cui, L., Elhai, J. D., & Hall, B. J. (2023). Symptoms of internet gaming disorder and depression in Chinese adolescents: A network analysis. Psychiatry Research, 322, 115097. https://doi.org/10.1016/j.psychres.2023.115097.
Tian, M., Chen, Q., Zhang, Y., Du, F., Hou, H., Chao, F., & Zhang, H. (2014). PET imaging reveals brain functional changes in internet gaming disorder. European Journal of Nuclear Medicine and Molecular Imaging, 41(7), 1388–1397. https://doi.org/10.1007/s00259-014-2708-8.
Vormbrock, R., Bruchmann, M., Menne, L., Straube, T., & Schindler, S. (2023). Testing stimulus exposure time as the critical factor of increased EPN and LPP amplitudes for fearful faces during perceptual distraction tasks. Cortex, 160, 9–23. https://doi.org/10.1016/j.cortex.2022.12.011.
Weinberg, A., & Hajcak, G. (2010). Beyond good and evil: The time-course of neural activity elicited by specific picture content. Emotion, 10(6), 767–782. https://doi.org/10.1037/a0020242.
Widyanto, L., & McMurran, M. (2004). The psychometric properties of the internet addiction test. CyberPsychology & Behavior, 7(4), 443–450. https://doi.org/10.1089/cpb.2004.7.443.
Wieser, M. J., Pauli, P., Reicherts, P., & Mühlberger, A. (2010). Don't look at me in anger! Enhanced processing of angry faces in anticipation of public speaking. Psychophysiology, 47(2), 271–280. https://doi.org/10.1111/j.1469-8986.2009.00938.x.
Wong, H. Y., Mo, H. Y., Potenza, M. N., Chan, M. N. M., Lau, W. M., Chui, T. K., … Lin, C. Y. (2020). Relationships between severity of internet gaming disorder, severity of problematic social media use, sleep quality and psychological distress. International Journal of Environmental Research and Public Health, 17(6), 1879. https://doi.org/10.3390/ijerph17061879.
Wu, T., Lin, C., Årestedt, K., Griffiths, M. D., Broström, A., & Pakpour, A. H. (2017). Psychometric validation of the Persian nine-item Internet Gaming Disorder Scale – Short Form: Does gender and hours spent online gaming affect the interpretations of item descriptions? Journal of Behavioral Addictions, 6(2), 256–263. https://doi.org/10.1556/2006.6.2017.025.
Wu, L., Zhu, L., Shi, X., Zhou, N., Wang, R., Liu, G., … Zhang, J. (2020). Impaired regulation of both addiction-related and primary rewards in individuals with internet gaming disorder. Psychiatry Research, 286, 112892. https://doi.org/10.1016/j.psychres.2020.112892.
Xia, M., Li, X., Ye, C., & Li, H. (2014). The ERPs for the facial expression processing. Advances in Psychological Science, 22(10), 1556. https://doi.org/10.3724/SP.J.1042.2014.01556.
Yam, C., Pakpour, A. H., Griffiths, M. D., Yau, W., Lo, C.-L. M., Ng, J. M. T., … Leung, H. (2019). Psychometric testing of three Chinese online-related addictive behavior instruments among Hong Kong university students. Psychiatric Quarterly, 90(1), 117–128. https://doi.org/10.1007/s11126-018-9610-7.
Yang, T., Yang, Z., Xu, G., Gao, D., Zhang, Z., Wang, H., … Sun, P. (2020). Tsinghua facial expression database – A database of facial expressions in Chinese young and older women and men: Development and validation. PLoS ONE, 15(4), e0231304. https://doi.org/10.1371/journal.pone.0231304.
Young, K. S. (1998). Caught in the net: How to recognize the signs of internet addiction – and a winning strategy for recovery. New York, NY: John Wiley & Sons.
Zhang, Q., Ran, G., & Li, X. (2018). The perception of facial emotional change in social anxiety: An ERP study. Frontiers in Psychology, 9, 1737. https://doi.org/10.3389/fpsyg.2018.01737.
Zhang, Z., Wang, S., Du, X., Qi, Y., Wang, L., & Dong, G. (2023). Brain responses to positive and negative events in individuals with internet gaming disorder during real gaming. Journal of Behavioral Addictions, 12(3), 758–774. https://doi.org/10.1556/2006.2023.00039.
Zhou, Y., Yao, M., Fang, S., & Gao, X. (2022). A dual-process perspective to explore decision making in internet gaming disorder: An ERP study of comparison with recreational game users. Computers in Human Behavior, 128, 107104. https://doi.org/10.1016/j.chb.2021.107104.