Paper Information
Title: Interpretable Rumor Detection in Microblogs by Attending to User Interactions
Authors: Ling Min Serena Khoo, Hai Leong Chieu, Zhong Qian, Jing Jiang
Venue: AAAI 2020
Paper: download
Code: download

Background

Rumor detection based on the wisdom of the crowd (Figure 1):
[Figure 1]
The paper's viewpoint: tree-structured rumor detection models often ignore the interactions between branches.
1 Introduction

Motivation: a user posting a reply might be replying to the entire thread rather than to a specific user.
Method: We propose a post-level attention model (PLAN) to model long-distance interactions between tweets with the multi-head attention mechanism in a transformer network.
We investigated variants of this model:
- a structure-aware self-attention model (StA-PLAN) that incorporates tree-structure information in the transformer network (see the sketch after this list);
- a hierarchical token- and post-level attention model (StA-HiTPLAN) that learns a sentence representation with token-level self-attention;
- We utilize the attention weights from our model to provide both token-level and post-level explanations for the model's prediction. To the best of our knowledge, ours is the first work to do so.
- We compare against previous works on two data sets: PHEME (5 events), and Twitter15 and Twitter16. Previous works evaluated on only one of the two.
- Our proposed models outperform current state-of-the-art models on both data sets.
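To make the StA-PLAN variant concrete, below is a minimal sketch of structure-aware self-attention in the style of Shaw et al. (2018), on which the paper builds: a learned embedding for each pairwise structural relation between posts is added to the keys and values before attention is computed. The relation set, dimensions, and names here are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_RELATIONS = 5  # e.g. parent, child, before, after, self (assumed relation set)

class StructureAwareAttention(nn.Module):
    """Single-head self-attention with additive structural embeddings (sketch)."""
    def __init__(self, d_k=64):
        super().__init__()
        # one learned vector per structural relation, for keys and for values
        self.rel_k = nn.Embedding(N_RELATIONS, d_k)
        self.rel_v = nn.Embedding(N_RELATIONS, d_k)

    def forward(self, q, k, v, rel):
        # q, k, v: (n, d_k); rel: (n, n) integer matrix of pairwise relations
        d_k = q.size(-1)
        k_struct = k.unsqueeze(0) + self.rel_k(rel)            # k_j + a^K_ij
        scores = (q.unsqueeze(1) * k_struct).sum(-1) / d_k ** 0.5
        alpha = F.softmax(scores, dim=-1)                      # attend over posts j
        v_struct = v.unsqueeze(0) + self.rel_v(rel)            # v_j + a^V_ij
        return (alpha.unsqueeze(-1) * v_struct).sum(dim=1)

# toy usage: 3 posts; rel[i][j] encodes how post j relates structurally to post i
q = k = v = torch.randn(3, 64)
rel = torch.randint(0, N_RELATIONS, (3, 3))
out = StructureAwareAttention()(q, k, v, rel)  # (3, 64)
```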
The veracity of a claim can be assessed from several sources of evidence:
(i) the content of the claim;
(ii) the bias and social network of the source of the claim;
(iii) fact checking with trustworthy sources;
(iv) community response to the claims.
This paper focuses on (iv), the community response.
2 Approaches

2.1 Recursive Neural Networks

Observation: rumor propagation trees are usually shallow; a user typically replies to the source post only once, early in the conversation.
| Dataset | Twitter15 | Twitter16 | PHEME |
| --- | --- | --- | --- |
| Tree depth | 2.80 | 2.77 | 3.12 |
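For concreteness, tree depth here counts the levels from the source post down to the deepest reply, so the averages above mean most threads end within two or three hops. A minimal sketch, assuming a hypothetical nested-dict representation of a reply tree (not the datasets' actual format):

```python
def tree_depth(node):
    """Depth of a reply tree given as {'children': [subtree, ...]}.
    A source post with no replies has depth 1."""
    if not node.get('children'):
        return 1
    return 1 + max(tree_depth(child) for child in node['children'])

# a shallow thread: a source post, two direct replies, one nested reply
thread = {'children': [{'children': []},
                       {'children': [{'children': []}]}]}
print(tree_depth(thread))  # 3
```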
2.2 Transformer Networks

The attention mechanism in the Transformer makes it possible to model long-range dependencies effectively. It is computed as:
$\alpha_{i j}=\operatorname{Compatibility}\left(q_{i}, k_{j}\right)=\operatorname{softmax}\left(\frac{q_{i} k_{j}^{T}}{\sqrt{d_{k}}}\right)\quad\quad\quad(1)$
$z_{i}=\sum_{j=1}^{n} \alpha_{i j} v_{j}\quad\quad\quad(2)$
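A minimal PyTorch sketch of Eqs. (1) and (2) for a single attention head; the function and variable names are illustrative:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (n, d_k) matrices of queries, keys, values
    d_k = q.size(-1)
    # Eq. (1): compatibility of every query i with every key j, softmax over j
    alpha = F.softmax(q @ k.transpose(-2, -1) / d_k ** 0.5, dim=-1)
    # Eq. (2): each z_i is the attention-weighted sum of the values
    return alpha @ v, alpha

# toy self-attention over 5 posts with d_k = 8
x = torch.randn(5, 8)
z, alpha = scaled_dot_product_attention(x, x, x)  # z: (5, 8), alpha: (5, 5)
```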
2.3 Post-Level Attention Network (PLAN)

The framework is as follows:
[Figure: PLAN framework]
First: arrange the posts in chronological order.
Next: apply max pooling over each post's token embeddings to obtain its sentence embedding.
Then: pass the sentence embeddings $X^{\prime}=\left(x_{1}^{\prime}, x_{2}^{\prime}, \ldots, x_{n}^{\prime}\right)$ through $s$ multi-head attention (MHA) blocks to obtain $U=\left(u_{1}, u_{2}, \ldots, u_{n}\right)$.
Finally: aggregate these outputs with an attention mechanism and make the prediction with a fully connected layer:
$\begin{array}{ll}\alpha_{k}=\operatorname{softmax}\left(\gamma^{T} u_{k}\right) & \quad\quad\quad(3)\\ v=\sum\limits_{k=0}^{m} \alpha_{k} u_{k} & \quad\quad\quad(4)\\ p=\operatorname{softmax}\left(W_{p}^{T} v+b_{p}\right) & \quad\quad\quad(5)\end{array}$
where $\gamma \in \mathbb{R}^{d_{\text{model}}}$, $\alpha_{k} \in \mathbb{R}$, $W_{p} \in \mathbb{R}^{d_{\text{model}} \times K}$, and $b_{p} \in \mathbb{R}^{K}$; $u_{k}$ is the output after passing through $s$ MHA layers, and $v$ and $p$ are the representation vector and the prediction vector for $X$.
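Putting the four steps together, here is a hedged PyTorch sketch of a PLAN-style forward pass: max-pooled sentence embeddings, $s$ multi-head attention layers, then the attention pooling and prediction of Eqs. (3)-(5). Using nn.TransformerEncoder and the sizes below are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PLANSketch(nn.Module):
    def __init__(self, d_model=256, n_heads=4, s_layers=2, n_classes=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.mha = nn.TransformerEncoder(layer, num_layers=s_layers)  # s MHA blocks
        self.gamma = nn.Parameter(torch.randn(d_model))  # gamma in Eq. (3)
        self.out = nn.Linear(d_model, n_classes)         # W_p, b_p in Eq. (5)

    def forward(self, token_emb):
        # token_emb: (n_posts, n_tokens, d_model), posts in chronological order
        x = token_emb.max(dim=1).values            # max-pool tokens -> X'
        u = self.mha(x.unsqueeze(0))[0]            # U = (u_1, ..., u_n)
        alpha = F.softmax(u @ self.gamma, dim=0)   # Eq. (3): post-level weights
        v = (alpha.unsqueeze(-1) * u).sum(dim=0)   # Eq. (4): thread vector
        return F.softmax(self.out(v), dim=-1)      # Eq. (5): class probabilities

# toy usage: a thread of 6 posts, 20 tokens each
probs = PLANSketch()(torch.randn(6, 20, 256))  # (n_classes,)
```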